Feb 19 08:20:55 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 08:20:55 crc restorecon[4769]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:20:55 crc restorecon[4769]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc 
restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc 
restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc 
restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc 
restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:20:55
crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:20:55 crc restorecon[4769]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:55 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc 
restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 
crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc 
restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc 
restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:56 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc 
restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc 
restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc 
restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:20:57 crc restorecon[4769]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 08:20:57 crc kubenswrapper[4780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 08:20:57 crc kubenswrapper[4780]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 08:20:57 crc kubenswrapper[4780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 08:20:57 crc kubenswrapper[4780]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 08:20:57 crc kubenswrapper[4780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 08:20:57 crc kubenswrapper[4780]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.668255 4780 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673443 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673463 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673468 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673473 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673477 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673482 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673487 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673491 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673495 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 
08:20:57.673499 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673504 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673511 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673515 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673520 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673524 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673529 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673533 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673538 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673543 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673547 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673551 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673555 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673558 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673562 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673566 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673569 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673573 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673578 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673582 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673585 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673589 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673592 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673596 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673600 4780 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673604 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673608 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673611 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673615 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673618 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673622 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673626 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673629 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673633 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673636 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673639 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673643 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673646 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673650 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673653 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673657 4780 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673661 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673665 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673671 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673675 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673679 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673683 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673690 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673694 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673698 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673702 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673706 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673710 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673714 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673717 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673721 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673724 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673728 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673732 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673735 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673739 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.673743 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673845 4780 flags.go:64] FLAG: --address="0.0.0.0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673854 4780 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673862 4780 flags.go:64] FLAG: --anonymous-auth="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673867 4780 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673873 4780 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673878 4780 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673884 4780 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673889 4780 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673893 4780 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673898 4780 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673903 4780 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673909 4780 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673913 4780 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673918 4780 flags.go:64] FLAG: --cgroup-root=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673922 4780 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673926 4780 flags.go:64] FLAG: --client-ca-file=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673930 4780 flags.go:64] FLAG: --cloud-config=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673937 4780 flags.go:64] FLAG: --cloud-provider=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673941 4780 flags.go:64] FLAG: --cluster-dns="[]"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673946 4780 flags.go:64] FLAG: --cluster-domain=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673951 4780 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673956 4780 flags.go:64] FLAG: --config-dir=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673960 4780 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673965 4780 flags.go:64] FLAG: --container-log-max-files="5"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673971 4780 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673975 4780 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673980 4780 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673984 4780 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673989 4780 flags.go:64] FLAG: --contention-profiling="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673993 4780 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.673997 4780 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674001 4780 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674006 4780 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674011 4780 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674015 4780 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674019 4780 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674023 4780 flags.go:64] FLAG: --enable-load-reader="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674028 4780 flags.go:64] FLAG: --enable-server="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674033 4780 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674039 4780 flags.go:64] FLAG: --event-burst="100"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674043 4780 flags.go:64] FLAG: --event-qps="50"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674048 4780 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674052 4780 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674056 4780 flags.go:64] FLAG: --eviction-hard=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674061 4780 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674066 4780 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674070 4780 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674075 4780 flags.go:64] FLAG: --eviction-soft=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674079 4780 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674084 4780 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674089 4780 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674093 4780 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674097 4780 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674101 4780 flags.go:64] FLAG: --fail-swap-on="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674105 4780 flags.go:64] FLAG: --feature-gates=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674110 4780 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674114 4780 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674141 4780 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674148 4780 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674154 4780 flags.go:64] FLAG: --healthz-port="10248"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674159 4780 flags.go:64] FLAG: --help="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674165 4780 flags.go:64] FLAG: --hostname-override=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674169 4780 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674175 4780 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674180 4780 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674184 4780 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674188 4780 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674192 4780 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674197 4780 flags.go:64] FLAG: --image-service-endpoint=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674201 4780 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674205 4780 flags.go:64] FLAG: --kube-api-burst="100"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674210 4780 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674214 4780 flags.go:64] FLAG: --kube-api-qps="50"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674218 4780 flags.go:64] FLAG: --kube-reserved=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674223 4780 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674227 4780 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674231 4780 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674235 4780 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674240 4780 flags.go:64] FLAG: --lock-file=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674244 4780 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674248 4780 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674253 4780 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674259 4780 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674265 4780 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674272 4780 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674276 4780 flags.go:64] FLAG: --logging-format="text"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674280 4780 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674285 4780 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674289 4780 flags.go:64] FLAG: --manifest-url=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674294 4780 flags.go:64] FLAG: --manifest-url-header=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674300 4780 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674304 4780 flags.go:64] FLAG: --max-open-files="1000000"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674309 4780 flags.go:64] FLAG: --max-pods="110"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674313 4780 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674318 4780 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674322 4780 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674326 4780 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674330 4780 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674334 4780 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674339 4780 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674349 4780 flags.go:64] FLAG: --node-status-max-images="50"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674353 4780 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674358 4780 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674362 4780 flags.go:64] FLAG: --pod-cidr=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674366 4780 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674375 4780 flags.go:64] FLAG: --pod-manifest-path=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674379 4780 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674383 4780 flags.go:64] FLAG: --pods-per-core="0"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674387 4780 flags.go:64] FLAG: --port="10250"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674392 4780 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674396 4780 flags.go:64] FLAG: --provider-id=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674400 4780 flags.go:64] FLAG: --qos-reserved=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674404 4780 flags.go:64] FLAG: --read-only-port="10255"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674408 4780 flags.go:64] FLAG: --register-node="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674416 4780 flags.go:64] FLAG: --register-schedulable="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674421 4780 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674428 4780 flags.go:64] FLAG: --registry-burst="10"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674432 4780 flags.go:64] FLAG: --registry-qps="5"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674436 4780 flags.go:64] FLAG: --reserved-cpus=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674441 4780 flags.go:64] FLAG: --reserved-memory=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674447 4780 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674451 4780 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674455 4780 flags.go:64] FLAG: --rotate-certificates="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674459 4780 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674463 4780 flags.go:64] FLAG: --runonce="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674468 4780 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674472 4780 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674476 4780 flags.go:64] FLAG: --seccomp-default="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674480 4780 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674484 4780 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674488 4780 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674493 4780 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674497 4780 flags.go:64] FLAG: --storage-driver-password="root"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674501 4780 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674506 4780 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674510 4780 flags.go:64] FLAG: --storage-driver-user="root"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674514 4780 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674518 4780 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674523 4780 flags.go:64] FLAG: --system-cgroups=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674527 4780 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674533 4780 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674537 4780 flags.go:64] FLAG: --tls-cert-file=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674541 4780 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674546 4780 flags.go:64] FLAG: --tls-min-version=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674550 4780 flags.go:64] FLAG: --tls-private-key-file=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674554 4780 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674561 4780 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674565 4780 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674569 4780 flags.go:64] FLAG: --v="2"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674575 4780 flags.go:64] FLAG: --version="false"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674581 4780 flags.go:64] FLAG: --vmodule=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674586 4780 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.674590 4780 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674684 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674689 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674694 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674698 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674702 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674706 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674710 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674713 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674717 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674720 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674727 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674730 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674734 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674737 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674741 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674745 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674748 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674752 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674755 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674759 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674762 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674766 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674769 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674773 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674777 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674782 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674786 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674790 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674794 4780 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674798 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674802 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674805 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674809 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674813 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674816 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674820 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674824 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674827 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674831 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674834 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674838 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674842 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674852 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674855 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674859 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674862 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674866 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674869 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674873 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674876 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674880 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674883 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674887 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674890 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674894 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674898 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674903 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674908 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674912 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674916 4780 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674919 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674923 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674926 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674930 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674933 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674937 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674940 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674944 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674947 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674951 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.674955 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.675910 4780 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.688873 4780 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.688939 4780 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689072 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689088 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689096 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689105 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689113 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689121 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219
08:20:57.689171 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689178 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689186 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689195 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689202 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689209 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689215 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689222 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689229 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689238 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689249 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689256 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689263 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689271 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689278 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689285 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689293 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689301 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689311 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689320 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689328 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689336 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689343 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689350 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 
08:20:57.689357 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689364 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689374 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689384 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689395 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689403 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689411 4780 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689418 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689425 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689432 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689440 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689447 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689454 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689461 4780 feature_gate.go:330] unrecognized feature gate: Example Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 
08:20:57.689468 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689475 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689482 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689489 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689496 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689503 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689510 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689517 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689527 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689534 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689541 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689550 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689557 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689564 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689573 4780 feature_gate.go:351] Setting deprecated feature 
gate KMSv1=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689582 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689589 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689597 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689606 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689616 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689624 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689631 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689638 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689645 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689652 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689660 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689670 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.689682 4780 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689925 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689942 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689949 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689957 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689966 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689974 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689980 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689987 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.689994 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690001 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690008 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690015 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 08:20:57 crc kubenswrapper[4780]: 
W0219 08:20:57.690022 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690029 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690037 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690045 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690052 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690059 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690066 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690073 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690080 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690088 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690094 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690101 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690108 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690116 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690145 4780 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstall Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690152 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690159 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690167 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690177 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690188 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690196 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690204 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690214 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690224 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690232 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690240 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690248 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690256 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690264 4780 feature_gate.go:330] unrecognized feature 
gate: BootcNodeManagement Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690271 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690278 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690285 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690292 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690300 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690308 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690315 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690322 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690329 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690337 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690346 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690357 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690366 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690374 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690382 4780 feature_gate.go:330] unrecognized feature gate: Example Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690390 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690397 4780 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690406 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690415 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690423 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690431 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690439 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690445 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690452 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690459 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690466 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 
08:20:57.690474 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690482 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690491 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.690501 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.690514 4780 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.690806 4780 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.696221 4780 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.696414 4780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.698318 4780 server.go:997] "Starting client certificate rotation" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.698355 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.700989 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-07 13:00:55.183872583 +0000 UTC Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.701201 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.730800 4780 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.735824 4780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.737439 4780 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.755372 4780 log.go:25] "Validated CRI v1 runtime API" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.802943 4780 log.go:25] "Validated CRI v1 image API" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.805258 4780 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.813247 4780 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-08-16-20-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.813311 4780 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.841409 4780 manager.go:217] Machine: {Timestamp:2026-02-19 08:20:57.838868535 +0000 UTC m=+0.582526034 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:acb2587c-96a2-4752-8cc0-31f3ec66dc5a BootID:b0e95c9e-0cc6-4df2-aa05-74b171e9d33d Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ab:e5:f5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ab:e5:f5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:55:4d:73 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:be:02:c0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:29:a3:08 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a9:6a:ed Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:c8:1b:45 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:a4:9c:c2:30:08 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:90:57:55:80:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 
Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 
08:20:57.841781 4780 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.842053 4780 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.842552 4780 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.842861 4780 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.842929 4780 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.843303 4780 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.843322 4780 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.843857 4780 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.843910 4780 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.844179 4780 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.844307 4780 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.850031 4780 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.850074 4780 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.850118 4780 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.850173 4780 kubelet.go:324] "Adding apiserver pod source"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.850244 4780 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.854280 4780 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.855447 4780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.857611 4780 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.857591 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.857604 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.857746 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.857759 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859629 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859669 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859683 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859697 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859720 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859733 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859747 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859768 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859784 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859798 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859828 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.859843 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.861015 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.861675 4780 server.go:1280] "Started kubelet"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.861967 4780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.863257 4780 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.863935 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Feb 19 08:20:57 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.864436 4780 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.868290 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.868479 4780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.869108 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:22:07.094076539 +0000 UTC
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.869304 4780 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.869374 4780 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.869701 4780 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.869051 4780 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.870667 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.870881 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.870914 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.876768 4780 factory.go:153] Registering CRI-O factory
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.876972 4780 factory.go:221] Registration of the crio container factory successfully
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.877378 4780 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.877585 4780 factory.go:55] Registering systemd factory
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.877746 4780 factory.go:221] Registration of the systemd container factory successfully
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.877926 4780 factory.go:103] Registering Raw factory
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.878100 4780 manager.go:1196] Started watching for new ooms in manager
Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.877178 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895981822435fe1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:20:57.861636065 +0000 UTC m=+0.605293544,LastTimestamp:2026-02-19 08:20:57.861636065 +0000 UTC m=+0.605293544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.880256 4780 manager.go:319] Starting recovery of all containers
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.880735 4780 server.go:460] "Adding debug handlers to kubelet server"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.903211 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.903823 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.903867 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.906925 4780 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907013 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907053 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907077 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907099 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907120 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907195 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907220 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907251 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907283 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907316 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907351 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907383 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907406 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907432 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907453 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907506 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907527 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907547 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907568 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907592 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907615 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907636 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907658 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907717 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907740 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907762 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907818 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907845 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907869 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907891 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907911 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907932 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.907978 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908002 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908023 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908065 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908086 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908708 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908802 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908834 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908875 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908907 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908937 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.908975 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909010 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909188 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909271 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909309 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909339 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909414 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909452 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909484 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909513 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909543 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909572 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909614 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909649 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909682 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909733 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909764 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909796 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909828 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909855 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909886 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909915 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909942 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.909970 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910005 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910035 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910063 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910090 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910117 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910182 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910211 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910254 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910285 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910309 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910339 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4"
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910369 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910397 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910423 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910450 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910477 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910502 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" 
seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910526 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910552 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910591 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910633 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910660 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910723 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 
08:20:57.910762 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910791 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910822 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910851 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910879 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910908 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910934 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910962 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.910995 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911029 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911061 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911184 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911218 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911250 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911279 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911310 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911340 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911371 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911421 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911465 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911494 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911523 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911550 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911577 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911603 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" 
seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911634 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911666 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911693 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911721 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911749 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911778 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: 
I0219 08:20:57.911806 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911839 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911867 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.911893 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912003 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912034 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912083 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912110 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912172 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912232 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912263 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912290 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912316 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912344 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912386 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912412 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912495 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912576 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912607 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912634 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912691 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912778 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912800 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912820 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912840 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912900 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912924 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912953 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.912981 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913011 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913038 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913081 4780 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913109 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913239 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913265 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913290 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913342 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913369 4780 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913397 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913424 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913475 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913542 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913632 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913662 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913691 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913723 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913752 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913779 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913809 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913965 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.913998 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914020 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914067 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914086 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914222 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914261 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914287 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914339 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914389 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914415 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914442 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914468 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914505 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914622 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914662 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914776 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914811 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914852 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914907 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914933 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.914984 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915015 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915043 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915110 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915221 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915251 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915301 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915350 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915388 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915446 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915491 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915533 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915570 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915599 4780 reconstruct.go:97] "Volume reconstruction finished" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.915650 4780 reconciler.go:26] "Reconciler: start to sync state" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.926055 4780 manager.go:324] Recovery completed Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.933429 4780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.936763 4780 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.936841 4780 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.936886 4780 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.936979 4780 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 08:20:57 crc kubenswrapper[4780]: W0219 08:20:57.937578 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.937862 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.948964 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.951757 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.951819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.951837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.954474 4780 
cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.954520 4780 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.954564 4780 state_mem.go:36] "Initialized new in-memory state store" Feb 19 08:20:57 crc kubenswrapper[4780]: E0219 08:20:57.971380 4780 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.982100 4780 policy_none.go:49] "None policy: Start" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.983840 4780 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 08:20:57 crc kubenswrapper[4780]: I0219 08:20:57.983876 4780 state_mem.go:35] "Initializing new in-memory state store" Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.037254 4780 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.047716 4780 manager.go:334] "Starting Device Plugin manager" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.047868 4780 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.047893 4780 server.go:79] "Starting device plugin registration server" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.048557 4780 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.048586 4780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.048734 4780 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.048888 4780 
plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.048896 4780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.058997 4780 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.072449 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.149706 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.151683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.151748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.151767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.151810 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.152447 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.237971 4780 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.238183 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.239698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.239763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.239790 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.240203 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.240351 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.240437 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242614 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242664 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242700 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.242892 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.243255 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.243279 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.244525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.244568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.244588 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.246925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.246964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.246982 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.247238 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.247362 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.247425 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248644 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248757 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.248943 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.249276 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.249352 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.249932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.249972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.249989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.250261 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.250336 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.251017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.251069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.251089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.251442 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.251481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.251499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321729 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321778 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321810 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321843 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321887 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321909 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321928 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321952 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321971 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.321989 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.322005 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.322021 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.322038 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.322054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc 
kubenswrapper[4780]: I0219 08:20:58.352698 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.355339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.355392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.355404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.355438 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.356230 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423378 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423518 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423553 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423585 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423617 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423647 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423679 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423744 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423780 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423821 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423856 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423891 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.423926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.424660 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.424759 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.424811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.424862 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.424916 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.424963 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425007 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425057 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425171 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425225 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425273 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425320 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425365 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425411 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.425458 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 
08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.473645 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.577347 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.584112 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.613867 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: W0219 08:20:58.631085 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-38b1cdcdad2f3dbbacea1f0fff23f4cc1ae434bc9a27df6d0b33a0581541c234 WatchSource:0}: Error finding container 38b1cdcdad2f3dbbacea1f0fff23f4cc1ae434bc9a27df6d0b33a0581541c234: Status 404 returned error can't find the container with id 38b1cdcdad2f3dbbacea1f0fff23f4cc1ae434bc9a27df6d0b33a0581541c234 Feb 19 08:20:58 crc kubenswrapper[4780]: W0219 08:20:58.631940 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-56ff92d930f84f13106251005d2b802bef7dad6b2ddf662077eebdaacb055db2 WatchSource:0}: Error finding container 56ff92d930f84f13106251005d2b802bef7dad6b2ddf662077eebdaacb055db2: Status 404 returned error can't find the container with id 
56ff92d930f84f13106251005d2b802bef7dad6b2ddf662077eebdaacb055db2 Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.633287 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: W0219 08:20:58.636285 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2bfb13cd1f99313026ae1323ff9a8083fbf8471913f38d2569ef45bff2501e64 WatchSource:0}: Error finding container 2bfb13cd1f99313026ae1323ff9a8083fbf8471913f38d2569ef45bff2501e64: Status 404 returned error can't find the container with id 2bfb13cd1f99313026ae1323ff9a8083fbf8471913f38d2569ef45bff2501e64 Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.641598 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:20:58 crc kubenswrapper[4780]: W0219 08:20:58.649010 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a7d147811f6f975ae6a903e536e6320720f12ca3e9cab9e16037d37095eda3d6 WatchSource:0}: Error finding container a7d147811f6f975ae6a903e536e6320720f12ca3e9cab9e16037d37095eda3d6: Status 404 returned error can't find the container with id a7d147811f6f975ae6a903e536e6320720f12ca3e9cab9e16037d37095eda3d6 Feb 19 08:20:58 crc kubenswrapper[4780]: W0219 08:20:58.664856 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-06519e9b6c702688c6710adf229246362a68657b84b7315ab01a68550120c479 WatchSource:0}: Error finding container 06519e9b6c702688c6710adf229246362a68657b84b7315ab01a68550120c479: Status 404 returned error can't find the container with id 
06519e9b6c702688c6710adf229246362a68657b84b7315ab01a68550120c479 Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.757048 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.758904 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.758965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.758983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.759018 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.759649 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.867831 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.870078 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:17:43.90985882 +0000 UTC Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.946680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06519e9b6c702688c6710adf229246362a68657b84b7315ab01a68550120c479"} Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.950028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a7d147811f6f975ae6a903e536e6320720f12ca3e9cab9e16037d37095eda3d6"} Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.951915 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2bfb13cd1f99313026ae1323ff9a8083fbf8471913f38d2569ef45bff2501e64"} Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.953102 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"56ff92d930f84f13106251005d2b802bef7dad6b2ddf662077eebdaacb055db2"} Feb 19 08:20:58 crc kubenswrapper[4780]: I0219 08:20:58.954531 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"38b1cdcdad2f3dbbacea1f0fff23f4cc1ae434bc9a27df6d0b33a0581541c234"} Feb 19 08:20:58 crc kubenswrapper[4780]: W0219 08:20:58.964443 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:58 crc kubenswrapper[4780]: E0219 08:20:58.964566 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:20:59 crc kubenswrapper[4780]: W0219 08:20:59.247691 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:59 crc kubenswrapper[4780]: E0219 08:20:59.248304 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:20:59 crc kubenswrapper[4780]: E0219 08:20:59.275256 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Feb 19 08:20:59 crc kubenswrapper[4780]: W0219 08:20:59.328049 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:59 crc kubenswrapper[4780]: E0219 08:20:59.328213 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 
19 08:20:59 crc kubenswrapper[4780]: W0219 08:20:59.410528 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:59 crc kubenswrapper[4780]: E0219 08:20:59.410656 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.559835 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.561867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.561907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.561923 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.561953 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:20:59 crc kubenswrapper[4780]: E0219 08:20:59.562772 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.867931 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.870264 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:02:05.309023351 +0000 UTC Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.928414 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 08:20:59 crc kubenswrapper[4780]: E0219 08:20:59.930236 4780 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.961485 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94"} Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.961591 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451"} Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.964066 4780 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd" exitCode=0 Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.964227 4780 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.964238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd"} Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.965467 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.965576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.965650 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.966853 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957" exitCode=0 Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.967015 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957"} Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.967097 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.968555 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.968608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.968627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.969825 4780 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="83f9475292313709ed188aad4cb0ee950e08bfe152fadf5f9d30955a397bb142" exitCode=0 Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.969934 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"83f9475292313709ed188aad4cb0ee950e08bfe152fadf5f9d30955a397bb142"} Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.970153 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.970789 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.971696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.971765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.971791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.972866 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.972917 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.972941 
4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.973023 4780 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="96155658e4cc945e15da6f23fca3543f9b05d729a214ef055df5161ea8031315" exitCode=0 Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.973076 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"96155658e4cc945e15da6f23fca3543f9b05d729a214ef055df5161ea8031315"} Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.973102 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.974441 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.974514 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:20:59 crc kubenswrapper[4780]: I0219 08:20:59.974541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.867762 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.872181 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:28:46.009899654 +0000 UTC Feb 19 08:21:00 crc kubenswrapper[4780]: E0219 08:21:00.876415 4780 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="3.2s" Feb 19 08:21:00 crc kubenswrapper[4780]: W0219 08:21:00.888402 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:21:00 crc kubenswrapper[4780]: E0219 08:21:00.888502 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.977746 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d59da7bc303d1fcb57f07cfd0f98691e1692e429ae91c8463dabd51f9110628b"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.977871 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.979191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.979224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.979233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.982784 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.982822 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.982889 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.983729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.983783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.983797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.987355 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.987412 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.987437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.987457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.988784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.988942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.988968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.991733 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.991792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.991805 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b"} 
Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.994184 4780 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ade8b416067c47a7848c5358e6a820f479613db1ce690f9e4c9d8ad5b08f947b" exitCode=0 Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.994232 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ade8b416067c47a7848c5358e6a820f479613db1ce690f9e4c9d8ad5b08f947b"} Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.994323 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.995317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.995350 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:00 crc kubenswrapper[4780]: I0219 08:21:00.995367 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.163943 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.165138 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.165173 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.165186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.165211 4780 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:21:01 crc kubenswrapper[4780]: E0219 08:21:01.165731 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Feb 19 08:21:01 crc kubenswrapper[4780]: W0219 08:21:01.562212 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:21:01 crc kubenswrapper[4780]: E0219 08:21:01.562364 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:21:01 crc kubenswrapper[4780]: W0219 08:21:01.642203 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:21:01 crc kubenswrapper[4780]: E0219 08:21:01.642307 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.867687 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Feb 19 08:21:01 crc kubenswrapper[4780]: I0219 08:21:01.872509 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:33:16.994608958 +0000 UTC Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.002750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566"} Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.002824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818"} Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.002990 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.004200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.004245 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.004263 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.014824 4780 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4e458bd0f3903df6d8c927865d49320c4d3d7b917d69fc534b0a6a0088c74323" exitCode=0 Feb 19 08:21:02 crc kubenswrapper[4780]: 
I0219 08:21:02.014956 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.015006 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.015157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4e458bd0f3903df6d8c927865d49320c4d3d7b917d69fc534b0a6a0088c74323"} Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.015430 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.015862 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.016575 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.016890 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.016933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.016951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.017373 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.017418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.017439 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.017894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.017937 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.017954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.018979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.019045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.019065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.820701 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:02 crc kubenswrapper[4780]: I0219 08:21:02.873024 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:19:08.572479673 +0000 UTC Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.023994 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fbe25ace38473b2f174716891d7aab95b15f3e824814188da1c16599c5844e9e"} Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.024042 4780 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.024066 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3961fe167f58c7c6b206abfb35e2657c2e1de558b878c7d632a8fef72436fc8f"} Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.024091 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c6a4da2cab230efb3c4fd64ba2fded66ea96728b6d66172966998f262bfc9610"} Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.024201 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.025240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.025292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.025314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.466541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.466782 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.468805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.468877 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.468903 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.479787 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.480000 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.481855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.481951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.482064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.490847 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:03 crc kubenswrapper[4780]: I0219 08:21:03.874051 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:14:49.15662531 +0000 UTC Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.038502 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"be7baf032a7666624d21c3854d46b4c4f08e577080290593b8d0e4545e79c133"} Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.038630 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1fa9d3455b17ea7a063f92a522e13bb1129a90e8679566cdebcadb0de0a93bb7"} Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.038642 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.038733 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.039386 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040454 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.041029 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.041055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.040980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.198260 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.366510 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.367985 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.368028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.368046 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.368080 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.762267 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 08:21:04 crc kubenswrapper[4780]: I0219 08:21:04.874276 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:33:51.012919532 +0000 UTC Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.041098 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.042561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.042614 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.042627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.283458 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.283725 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.285343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.285380 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.285392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.402539 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.402793 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.404561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.404604 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.404622 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 08:21:05 crc kubenswrapper[4780]: I0219 08:21:05.874730 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:07:29.221393745 +0000 UTC Feb 19 08:21:06 crc kubenswrapper[4780]: I0219 08:21:06.045171 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:06 crc kubenswrapper[4780]: I0219 08:21:06.046317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:06 crc kubenswrapper[4780]: I0219 08:21:06.046395 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:06 crc kubenswrapper[4780]: I0219 08:21:06.046422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:06 crc kubenswrapper[4780]: I0219 08:21:06.875095 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:59:21.716992865 +0000 UTC Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.414766 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.415037 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.416580 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.416624 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.416640 4780 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.862471 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.862760 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.864987 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.865047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.865059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:07 crc kubenswrapper[4780]: I0219 08:21:07.875699 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:18:58.692322539 +0000 UTC Feb 19 08:21:08 crc kubenswrapper[4780]: E0219 08:21:08.059870 4780 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 08:21:08 crc kubenswrapper[4780]: I0219 08:21:08.875883 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:08:13.046494235 +0000 UTC Feb 19 08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.498617 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.498920 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 
08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.500562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.500601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.500614 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.504654 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:09 crc kubenswrapper[4780]: I0219 08:21:09.876405 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:34:49.247378626 +0000 UTC Feb 19 08:21:10 crc kubenswrapper[4780]: I0219 08:21:10.058358 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:10 crc kubenswrapper[4780]: I0219 08:21:10.059563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:10 crc kubenswrapper[4780]: I0219 08:21:10.059626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:10 crc kubenswrapper[4780]: I0219 08:21:10.059639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:10 crc kubenswrapper[4780]: I0219 08:21:10.885315 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:24:39.97104529 +0000 UTC Feb 19 08:21:11 crc kubenswrapper[4780]: I0219 08:21:11.885921 4780 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:24:41.034375176 +0000 UTC Feb 19 08:21:12 crc kubenswrapper[4780]: W0219 08:21:12.384731 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.384946 4780 trace.go:236] Trace[1442754398]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:21:02.382) (total time: 10002ms): Feb 19 08:21:12 crc kubenswrapper[4780]: Trace[1442754398]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:21:12.384) Feb 19 08:21:12 crc kubenswrapper[4780]: Trace[1442754398]: [10.002043165s] [10.002043165s] END Feb 19 08:21:12 crc kubenswrapper[4780]: E0219 08:21:12.384988 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.493843 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34982->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.493940 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34982->192.168.126.11:17697: read: connection reset by peer" Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.499033 4780 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.499169 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.821565 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.821695 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.868175 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 08:21:12 crc kubenswrapper[4780]: I0219 08:21:12.886436 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:09:25.019376588 +0000 UTC Feb 19 08:21:12 crc kubenswrapper[4780]: E0219 08:21:12.899172 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.1895981822435fe1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:20:57.861636065 +0000 UTC m=+0.605293544,LastTimestamp:2026-02-19 08:20:57.861636065 +0000 UTC m=+0.605293544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.069744 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.072720 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566" exitCode=255 Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.072771 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566"} Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.072964 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.074293 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.074381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.074409 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.075644 4780 scope.go:117] "RemoveContainer" containerID="97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.208078 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.208165 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 08:21:13 crc kubenswrapper[4780]: I0219 08:21:13.887030 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 
16:50:13.455225181 +0000 UTC Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.080237 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.083654 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be"} Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.083900 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.085352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.085419 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.085437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:14 crc kubenswrapper[4780]: I0219 08:21:14.887664 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:26:04.103245079 +0000 UTC Feb 19 08:21:15 crc kubenswrapper[4780]: I0219 08:21:15.888680 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:05:03.466790713 +0000 UTC Feb 19 08:21:16 crc kubenswrapper[4780]: I0219 08:21:16.889737 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:16:59.303282424 +0000 
UTC Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.469321 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.469667 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.471317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.471372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.471390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.488313 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.829766 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.830025 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.830272 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.831906 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.831967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.831989 4780 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.837445 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:17 crc kubenswrapper[4780]: I0219 08:21:17.890724 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:42:43.091213591 +0000 UTC Feb 19 08:21:18 crc kubenswrapper[4780]: E0219 08:21:18.060026 4780 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.096354 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.096442 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.098260 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.098317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.098260 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.098343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.098358 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.098377 4780 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.195215 4780 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 08:21:18 crc kubenswrapper[4780]: E0219 08:21:18.197434 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 08:21:18 crc kubenswrapper[4780]: E0219 08:21:18.203102 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.203435 4780 trace.go:236] Trace[1248272961]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:21:06.430) (total time: 11772ms): Feb 19 08:21:18 crc kubenswrapper[4780]: Trace[1248272961]: ---"Objects listed" error: 11772ms (08:21:18.203) Feb 19 08:21:18 crc kubenswrapper[4780]: Trace[1248272961]: [11.772446281s] [11.772446281s] END Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.203476 4780 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.203967 4780 trace.go:236] Trace[1390242245]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:21:07.493) (total time: 10710ms): Feb 19 08:21:18 crc kubenswrapper[4780]: Trace[1390242245]: ---"Objects listed" error: 10710ms (08:21:18.203) Feb 19 08:21:18 crc kubenswrapper[4780]: Trace[1390242245]: [10.710166354s] [10.710166354s] END Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.204047 4780 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 08:21:18 crc 
kubenswrapper[4780]: I0219 08:21:18.204435 4780 trace.go:236] Trace[531415538]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:21:05.544) (total time: 12659ms): Feb 19 08:21:18 crc kubenswrapper[4780]: Trace[531415538]: ---"Objects listed" error: 12659ms (08:21:18.204) Feb 19 08:21:18 crc kubenswrapper[4780]: Trace[531415538]: [12.659872678s] [12.659872678s] END Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.204471 4780 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.205945 4780 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.218091 4780 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.891454 4780 apiserver.go:52] "Watching apiserver" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.891470 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:58:56.564606119 +0000 UTC Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.923791 4780 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.924146 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.924515 4780 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.924551 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:18 crc kubenswrapper[4780]: E0219 08:21:18.924806 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.926950 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:18 crc kubenswrapper[4780]: E0219 08:21:18.927032 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.927171 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.927826 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.927813 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:18 crc kubenswrapper[4780]: E0219 08:21:18.928524 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.933745 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.933961 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.934210 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.934336 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.934655 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.934902 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.934908 4780 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.934924 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.935031 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.968973 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.970708 4780 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 08:21:18 crc kubenswrapper[4780]: I0219 08:21:18.983755 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.005361 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013451 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 
08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013486 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013519 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013547 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013575 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013606 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013629 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013655 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013722 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013778 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013869 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 08:21:19 crc 
kubenswrapper[4780]: I0219 08:21:19.013923 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013958 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.013965 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014004 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014039 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014075 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014110 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014168 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014205 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014238 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014364 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014398 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014429 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014463 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.014900 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.015231 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.015400 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.015545 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.015706 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016206 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016315 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016395 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016879 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016946 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.017265 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.017574 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.017634 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.017579 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.017589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.016694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018111 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018247 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018322 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018761 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018566 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018874 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018979 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.018776 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019014 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019036 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019070 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019098 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019138 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019166 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019193 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019221 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019270 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019299 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019347 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019396 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019417 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:21:19 crc 
kubenswrapper[4780]: I0219 08:21:19.019425 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019442 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019760 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019806 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019847 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019905 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.019963 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020028 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020097 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020212 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020281 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020335 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020403 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020459 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020633 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020689 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020798 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020911 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020979 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021047 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021111 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021280 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021346 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021482 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021547 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021671 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021740 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021798 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021870 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021979 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") 
" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022039 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022104 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022194 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022265 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022333 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022468 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022535 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020218 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020926 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.020936 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.021416 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022426 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.023191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.023332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.023400 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.023954 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024095 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024241 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024334 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024530 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024656 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.024873 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025270 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025615 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025707 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025751 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.025838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.026141 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.026147 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.026540 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.026579 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.022589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.026813 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027259 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027367 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027444 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027519 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027598 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027669 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027741 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027809 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027953 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 
08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028092 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028259 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028339 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028548 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028615 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028689 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028790 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028938 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 08:21:19 crc 
kubenswrapper[4780]: I0219 08:21:19.029019 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029092 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029226 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029300 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029436 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029513 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029582 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029668 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029775 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029942 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:21:19 crc 
kubenswrapper[4780]: I0219 08:21:19.030014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030093 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030211 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030291 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030362 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030444 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030529 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030606 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030683 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030756 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030829 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") 
" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030968 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031041 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031113 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031208 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031276 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031344 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031419 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031554 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031622 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031688 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 
19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031760 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031828 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031892 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032296 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032372 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032450 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032517 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032582 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032664 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032739 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032808 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032882 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032950 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033022 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033091 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033186 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033258 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033325 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033393 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033543 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 
08:21:19.033609 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033784 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033925 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034007 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034075 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034162 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027297 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027331 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.027741 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028061 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028080 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028110 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028409 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028449 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.028801 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034414 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029143 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029635 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029659 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029846 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.029991 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031438 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031525 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031670 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.031920 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032231 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032337 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032432 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032562 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032748 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.032743 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033269 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033371 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.033748 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034448 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034843 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034919 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.035203 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.035281 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.035464 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.035492 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.035767 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.035971 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.036119 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.036442 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.036417 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.036538 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.036838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.036661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.037612 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.037938 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.037691 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.037954 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038177 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.034261 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038288 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038313 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038355 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038846 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038891 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038920 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038974 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039000 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039029 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039058 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039085 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 
08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039111 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039164 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039190 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039224 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039252 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039393 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039437 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039471 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039543 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039574 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039601 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039628 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039656 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039684 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039707 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039734 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039763 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039886 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039911 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039926 4780 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039941 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039955 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039969 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039982 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039998 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040014 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040030 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040044 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040058 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040071 4780 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040085 4780 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040099 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040113 4780 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040148 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040163 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040177 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040190 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040204 4780 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040218 4780 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040232 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040248 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040264 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040279 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040294 4780 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040306 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040328 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040343 4780 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040357 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040370 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040382 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040397 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040411 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040424 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040439 4780 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040452 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040466 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040479 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040493 4780 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040505 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040519 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040535 4780 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040549 4780 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040563 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040579 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040594 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040609 4780 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040623 4780 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040641 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: 
I0219 08:21:19.040654 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040667 4780 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040681 4780 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040693 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040706 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040719 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040731 4780 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040743 4780 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040755 4780 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040767 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040777 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040789 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040800 4780 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040815 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040827 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" 
Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040839 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040854 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040867 4780 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040879 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040892 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040903 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040915 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040926 4780 reconciler_common.go:293] "Volume 
detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040939 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040950 4780 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040964 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040976 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040988 4780 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041002 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041016 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041028 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041040 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041052 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041065 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041079 4780 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041094 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041107 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041150 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041162 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041174 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041186 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041202 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041214 4780 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041226 4780 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 
08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041237 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041251 4780 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041263 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041276 4780 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041289 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041305 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041318 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041332 4780 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041347 4780 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041359 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038315 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042315 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038360 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.038642 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042526 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042738 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.043092 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.043422 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.043561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.043668 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.043825 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.044011 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.044202 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.044199 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039773 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039972 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039992 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040265 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040108 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040567 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.040583 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039846 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041052 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041293 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041282 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041560 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.041672 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:21:19.541628747 +0000 UTC m=+22.285286236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.045026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.045392 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" 
(OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041706 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.041834 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042012 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042201 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042193 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.042193 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.044320 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.044836 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.044887 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.039221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.030210 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.046673 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.046757 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.045439 4780 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.047203 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.047336 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:19.547315781 +0000 UTC m=+22.290973230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.047419 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.047528 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.047694 4780 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048030 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048184 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048251 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048340 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048353 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048630 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.048905 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.049278 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:19.549263587 +0000 UTC m=+22.292921036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.049269 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.049862 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.077363 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.077776 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.078154 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.078178 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.078198 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.078270 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:19.578245161 +0000 UTC m=+22.321902610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.082744 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.083390 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.083733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.088409 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.092522 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.092625 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.092685 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.092817 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:19.592790514 +0000 UTC m=+22.336447963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.093229 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.094180 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.094176 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.095245 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.095843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.097986 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.098344 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.098610 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.099068 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.100118 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.100090 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.101200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.101440 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.101574 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.102377 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.102760 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.102909 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.103037 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.103070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.103173 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.103810 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.103860 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.103867 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.104214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.104536 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.105094 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.104537 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.105755 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.105976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.109856 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.110323 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.110472 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.110605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.111442 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.111941 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.113795 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.115720 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.116144 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.128245 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.128610 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.131485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.132684 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142402 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142489 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142548 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142564 4780 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142578 4780 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142593 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142609 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142624 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142637 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142648 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142661 4780 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142674 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 
08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142686 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142697 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142709 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142722 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142735 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142747 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142763 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142778 4780 reconciler_common.go:293] "Volume 
detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142790 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142802 4780 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142820 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142832 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142844 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142855 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142870 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142882 4780 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142894 4780 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142905 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142918 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142931 4780 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142945 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142958 4780 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") 
on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142974 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.142988 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143001 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143014 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143026 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143038 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143052 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc 
kubenswrapper[4780]: I0219 08:21:19.143065 4780 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143078 4780 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143089 4780 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143101 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143116 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143148 4780 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143162 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 
08:21:19.143173 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143185 4780 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143199 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143212 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143225 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143237 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143250 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143262 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143278 4780 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143291 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143302 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143315 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143327 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143341 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143353 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143368 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143382 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143397 4780 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143408 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143420 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143472 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143483 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143493 4780 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143501 4780 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143511 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143520 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143529 4780 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143527 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143539 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143583 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143596 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143620 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143632 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143641 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143650 4780 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143659 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 
08:21:19.143667 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143676 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143710 4780 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143733 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143742 4780 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143751 4780 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143760 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.143769 4780 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.144152 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.244512 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.244617 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.255254 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.263606 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:21:19 crc kubenswrapper[4780]: W0219 08:21:19.269481 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-819498e7f724aad51e6c429d2215901458c4f70d19045ec91fd8cf84fd4de277 WatchSource:0}: Error finding container 819498e7f724aad51e6c429d2215901458c4f70d19045ec91fd8cf84fd4de277: Status 404 returned error can't find the container with id 819498e7f724aad51e6c429d2215901458c4f70d19045ec91fd8cf84fd4de277 Feb 19 08:21:19 crc kubenswrapper[4780]: W0219 08:21:19.281857 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-59c6f163665251f0b5e47ae3604cd52e1f32278b22ffa1a0e9f218b88356c136 WatchSource:0}: Error finding container 59c6f163665251f0b5e47ae3604cd52e1f32278b22ffa1a0e9f218b88356c136: Status 404 returned error can't find the container with id 59c6f163665251f0b5e47ae3604cd52e1f32278b22ffa1a0e9f218b88356c136 Feb 19 08:21:19 crc kubenswrapper[4780]: W0219 08:21:19.286001 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5490402c21ebf8509ce7c82508a6e8dca8ffd5078cf35cd978e2825890163fbd WatchSource:0}: Error finding container 5490402c21ebf8509ce7c82508a6e8dca8ffd5078cf35cd978e2825890163fbd: Status 404 returned error can't find the container with id 5490402c21ebf8509ce7c82508a6e8dca8ffd5078cf35cd978e2825890163fbd Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.505106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.514266 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.519849 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.525037 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.532155 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.545781 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.546372 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.546572 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:21:20.546545786 +0000 UTC m=+23.290203255 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.564422 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.581084 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.600941 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.617314 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.633071 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.647542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.647611 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.647642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.647674 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.647826 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.647895 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:20.647875238 +0000 UTC m=+23.391532687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.647939 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.647960 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:20.647954399 +0000 UTC m=+23.391611838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648028 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648058 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648073 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648108 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:20.648093343 +0000 UTC m=+23.391750792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648358 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648400 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648415 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: E0219 08:21:19.648493 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:20.648470052 +0000 UTC m=+23.392127521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.648450 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.661995 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.680272 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.699475 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.714966 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.726590 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.737802 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.893327 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:28:42.993590052 +0000 UTC Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.944694 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.945990 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.948869 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.950557 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.952921 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.954169 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.956555 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.958488 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.960615 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.961700 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.962959 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.963920 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.965144 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.965848 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.967154 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.967846 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.968625 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.969646 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.970460 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" 
path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.971765 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.972381 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.973144 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.974330 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.975216 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.976487 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.977335 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.978163 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.978686 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.979370 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.979872 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.980534 4780 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.980678 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.982104 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.982598 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.983012 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.984165 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.984818 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.985357 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.986044 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.990054 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.990565 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.991563 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.992601 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.993238 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.994061 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.994618 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.995487 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.996244 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.997074 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.997540 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.997991 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.998921 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 08:21:19 crc kubenswrapper[4780]: I0219 08:21:19.999735 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.000581 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.111016 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4"} Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.111158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"819498e7f724aad51e6c429d2215901458c4f70d19045ec91fd8cf84fd4de277"} Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.112567 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5490402c21ebf8509ce7c82508a6e8dca8ffd5078cf35cd978e2825890163fbd"} Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.115513 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff"} Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.115682 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc"} Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.115707 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"59c6f163665251f0b5e47ae3604cd52e1f32278b22ffa1a0e9f218b88356c136"} Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.127399 4780 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.133905 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.153762 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.170937 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.189395 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.208832 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.243997 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.277592 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.291062 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.303276 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.314908 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.334379 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.349045 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.362404 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.375698 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.392322 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.406561 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.556613 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.556820 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-19 08:21:22.556781553 +0000 UTC m=+25.300439052 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.658474 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.658654 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658713 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658740 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658758 4780 projected.go:194] 
Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.658818 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658832 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:22.658814082 +0000 UTC m=+25.402471541 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658914 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.658924 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658950 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:22.658941725 +0000 UTC m=+25.402599184 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658828 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.658990 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.659004 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.659006 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.659037 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:22.659028047 +0000 UTC m=+25.402685506 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.659181 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:22.65916398 +0000 UTC m=+25.402821439 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.894515 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:20:32.490540812 +0000 UTC Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.938089 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.938350 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.938429 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:20 crc kubenswrapper[4780]: I0219 08:21:20.938494 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.938681 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:20 crc kubenswrapper[4780]: E0219 08:21:20.938864 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:21 crc kubenswrapper[4780]: I0219 08:21:21.895369 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:25:01.975603479 +0000 UTC Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.576744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.576955 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:21:26.576905139 +0000 UTC m=+29.320562638 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.677677 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.677756 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.677786 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.677811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.677838 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.677962 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:26.677936374 +0000 UTC m=+29.421593833 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.677989 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678360 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:26.678081647 +0000 UTC m=+29.421739156 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678095 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678426 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678446 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678450 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678516 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678539 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678548 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:26.678517577 +0000 UTC m=+29.422175076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.678637 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:26.6786069 +0000 UTC m=+29.422264379 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.896323 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:57:51.43045164 +0000 UTC Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.937378 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.937598 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.937608 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:22 crc kubenswrapper[4780]: I0219 08:21:22.937715 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.937855 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:22 crc kubenswrapper[4780]: E0219 08:21:22.938200 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.128001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77"} Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.149694 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.173611 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.195162 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.220238 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.243519 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.259951 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.282093 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.297823 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:23 crc kubenswrapper[4780]: I0219 08:21:23.897504 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:21:14.6343518 +0000 UTC Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.603880 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.606898 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.606966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.607037 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.607186 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.618278 4780 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.618456 4780 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.620339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.620394 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.620413 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.620442 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.620462 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.652897 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.658000 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.658064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.658083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.658110 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.658160 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.677217 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.685726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.685808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.685838 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.685873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.685897 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.711385 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.716361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.716431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.716456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.716490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.716514 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.744840 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.749413 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.749457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.749469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.749488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.749501 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.769455 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.769694 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.771645 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.771685 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.771696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.771717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.771729 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.874310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.874372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.874385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.874407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.874418 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.898738 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:06:36.039545972 +0000 UTC Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.937411 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.937467 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.937411 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.937569 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.937626 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:24 crc kubenswrapper[4780]: E0219 08:21:24.937809 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.950781 4780 csr.go:261] certificate signing request csr-7rv2z is approved, waiting to be issued Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.977010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.977056 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.977069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.977088 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.977101 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:24Z","lastTransitionTime":"2026-02-19T08:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:24 crc kubenswrapper[4780]: I0219 08:21:24.988144 4780 csr.go:257] certificate signing request csr-7rv2z is issued Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.079975 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.080025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.080035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.080053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.080064 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.182825 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.182882 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.182896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.182917 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.182931 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.286579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.286625 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.286637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.286659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.286674 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.390223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.390280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.390297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.390321 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.390340 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.493042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.493117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.493145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.493169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.493182 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.596326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.596387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.596401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.596426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.596441 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.698908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.698953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.698963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.698984 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.698996 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.759417 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-cs47t"] Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.759825 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rw5ts"] Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.760014 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jgjfm"] Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.760104 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.760261 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.760591 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mlb49"] Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.760917 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.761782 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.764794 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.765107 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.768272 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.768453 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.768625 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.768771 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.768795 4780 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.769061 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.769609 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.771331 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.771433 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.771508 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.771515 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.771556 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.772082 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.772267 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-skpt9"] Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.773480 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.775222 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.775713 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.775781 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.776419 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.776463 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.776463 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.778332 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.788059 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.801499 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.803829 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.803895 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.803907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.803931 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.803942 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.816377 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.832355 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.846103 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.862503 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.877188 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.888249 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.899187 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:08:14.494721019 +0000 UTC Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.906243 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.906302 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.906313 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.906334 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.906346 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:25Z","lastTransitionTime":"2026-02-19T08:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907498 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-hostroot\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907529 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-os-release\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skpt9\" (UID: 
\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907624 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-script-lib\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907686 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-cni-binary-copy\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907714 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-etc-kubernetes\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907761 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdfl\" (UniqueName: \"kubernetes.io/projected/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-kube-api-access-gbdfl\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907810 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mlb49\" (UID: 
\"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907835 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-systemd\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-ovn\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907889 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-node-log\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907923 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-bin\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907946 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-netd\") pod \"ovnkube-node-skpt9\" (UID: 
\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.907996 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-cnibin\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfms\" (UniqueName: \"kubernetes.io/projected/f61eb1a9-489e-42f7-811c-36eb08e442d2-kube-api-access-gzfms\") pod \"node-resolver-cs47t\" (UID: \"f61eb1a9-489e-42f7-811c-36eb08e442d2\") " pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908044 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-netns\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908094 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/920aa359-8647-440a-842e-066313c39414-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908191 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-env-overrides\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908252 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-netns\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908270 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-kubelet\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908319 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f61eb1a9-489e-42f7-811c-36eb08e442d2-hosts-file\") pod \"node-resolver-cs47t\" (UID: \"f61eb1a9-489e-42f7-811c-36eb08e442d2\") " pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-k8s-cni-cncf-io\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908382 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-cni-multus\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-multus-certs\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908483 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/920aa359-8647-440a-842e-066313c39414-proxy-tls\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-system-cni-dir\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-log-socket\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-os-release\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908633 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a293d184-7162-4977-8158-1b459d68981b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908656 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-systemd-units\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908678 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-cni-bin\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908713 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-daemon-config\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908753 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-cnibin\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908790 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/920aa359-8647-440a-842e-066313c39414-rootfs\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908830 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-cni-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmgzk\" (UniqueName: \"kubernetes.io/projected/920aa359-8647-440a-842e-066313c39414-kube-api-access-pmgzk\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908892 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-config\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96p2w\" (UniqueName: \"kubernetes.io/projected/6e649075-d5ae-4d3a-b0af-b8f7f7784035-kube-api-access-96p2w\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908953 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-ovn-kubernetes\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.908988 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a293d184-7162-4977-8158-1b459d68981b-cni-binary-copy\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909030 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrcz\" (UniqueName: \"kubernetes.io/projected/a293d184-7162-4977-8158-1b459d68981b-kube-api-access-vrrcz\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " 
pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909050 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-var-lib-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909072 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovn-node-metrics-cert\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909103 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-system-cni-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909144 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-socket-dir-parent\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909168 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-slash\") pod \"ovnkube-node-skpt9\" (UID: 
\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-etc-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909275 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-kubelet\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.909390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-conf-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.922791 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.946627 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.966417 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.986569 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.989332 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 08:16:24 +0000 UTC, rotation deadline is 2027-01-09 01:14:49.735089141 +0000 UTC Feb 19 08:21:25 crc kubenswrapper[4780]: I0219 08:21:25.989453 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7768h53m23.745641747s for next certificate rotation Feb 19 
08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.002137 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.009853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.009962 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.009989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010027 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010048 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/920aa359-8647-440a-842e-066313c39414-proxy-tls\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-system-cni-dir\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010057 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010169 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-log-socket\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010204 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-os-release\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010235 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a293d184-7162-4977-8158-1b459d68981b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010263 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-systemd-units\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010292 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-cni-bin\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-daemon-config\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010360 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-cnibin\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010391 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/920aa359-8647-440a-842e-066313c39414-rootfs\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010456 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-cni-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010484 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmgzk\" (UniqueName: \"kubernetes.io/projected/920aa359-8647-440a-842e-066313c39414-kube-api-access-pmgzk\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010512 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-config\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010531 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-log-socket\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010641 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-os-release\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96p2w\" (UniqueName: \"kubernetes.io/projected/6e649075-d5ae-4d3a-b0af-b8f7f7784035-kube-api-access-96p2w\") pod 
\"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010684 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-cni-bin\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-ovn-kubernetes\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010534 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-systemd-units\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010755 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/920aa359-8647-440a-842e-066313c39414-rootfs\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-ovn-kubernetes\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.010927 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-system-cni-dir\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-cni-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011098 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a293d184-7162-4977-8158-1b459d68981b-cni-binary-copy\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-cnibin\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc 
kubenswrapper[4780]: I0219 08:21:26.011262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrcz\" (UniqueName: \"kubernetes.io/projected/a293d184-7162-4977-8158-1b459d68981b-kube-api-access-vrrcz\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011312 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-var-lib-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-var-lib-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovn-node-metrics-cert\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011499 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-daemon-config\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 
08:21:26.011503 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a293d184-7162-4977-8158-1b459d68981b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011555 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-system-cni-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-socket-dir-parent\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011648 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-socket-dir-parent\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011639 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-slash\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011690 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-etc-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-system-cni-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011723 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-kubelet\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011752 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-conf-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011782 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-hostroot\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-os-release\") pod \"multus-additional-cni-plugins-mlb49\" 
(UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011816 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-config\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011836 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011875 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-script-lib\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-cni-binary-copy\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011931 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-etc-kubernetes\") pod \"multus-jgjfm\" (UID: 
\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.011956 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdfl\" (UniqueName: \"kubernetes.io/projected/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-kube-api-access-gbdfl\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012027 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-systemd\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012048 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-ovn\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-node-log\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 
08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012101 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-bin\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012148 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-netd\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a293d184-7162-4977-8158-1b459d68981b-cni-binary-copy\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012179 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-cnibin\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 
crc kubenswrapper[4780]: I0219 08:21:26.012529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-netd\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-bin\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012573 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-kubelet\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012596 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-slash\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-multus-conf-dir\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012656 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-etc-openvswitch\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012764 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-os-release\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012815 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-cni-binary-copy\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012822 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-systemd\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012851 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-etc-kubernetes\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-hostroot\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " 
pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012888 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-ovn\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012901 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-node-log\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012912 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-script-lib\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfms\" (UniqueName: \"kubernetes.io/projected/f61eb1a9-489e-42f7-811c-36eb08e442d2-kube-api-access-gzfms\") pod \"node-resolver-cs47t\" (UID: \"f61eb1a9-489e-42f7-811c-36eb08e442d2\") " pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.012993 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-cnibin\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013057 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-netns\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013120 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-netns\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013157 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/920aa359-8647-440a-842e-066313c39414-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-env-overrides\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013573 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-netns\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-netns\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-kubelet\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013667 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f61eb1a9-489e-42f7-811c-36eb08e442d2-hosts-file\") pod \"node-resolver-cs47t\" (UID: \"f61eb1a9-489e-42f7-811c-36eb08e442d2\") " pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013697 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-k8s-cni-cncf-io\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013706 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a293d184-7162-4977-8158-1b459d68981b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013726 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-cni-multus\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013753 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-multus-certs\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013785 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f61eb1a9-489e-42f7-811c-36eb08e442d2-hosts-file\") pod \"node-resolver-cs47t\" (UID: \"f61eb1a9-489e-42f7-811c-36eb08e442d2\") " pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-multus-certs\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013841 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-kubelet\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013883 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-run-k8s-cni-cncf-io\") pod \"multus-jgjfm\" (UID: 
\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.013911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-host-var-lib-cni-multus\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.014211 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/920aa359-8647-440a-842e-066313c39414-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.014263 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-env-overrides\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.019796 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovn-node-metrics-cert\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.029045 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.029103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/920aa359-8647-440a-842e-066313c39414-proxy-tls\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.038085 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmgzk\" (UniqueName: \"kubernetes.io/projected/920aa359-8647-440a-842e-066313c39414-kube-api-access-pmgzk\") pod \"machine-config-daemon-rw5ts\" (UID: \"920aa359-8647-440a-842e-066313c39414\") " pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.041010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-96p2w\" (UniqueName: \"kubernetes.io/projected/6e649075-d5ae-4d3a-b0af-b8f7f7784035-kube-api-access-96p2w\") pod \"ovnkube-node-skpt9\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.042204 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfms\" (UniqueName: \"kubernetes.io/projected/f61eb1a9-489e-42f7-811c-36eb08e442d2-kube-api-access-gzfms\") pod \"node-resolver-cs47t\" (UID: \"f61eb1a9-489e-42f7-811c-36eb08e442d2\") " pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.044655 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrcz\" (UniqueName: \"kubernetes.io/projected/a293d184-7162-4977-8158-1b459d68981b-kube-api-access-vrrcz\") pod \"multus-additional-cni-plugins-mlb49\" (UID: \"a293d184-7162-4977-8158-1b459d68981b\") " pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.049108 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.054115 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdfl\" (UniqueName: \"kubernetes.io/projected/c3eeec30-c76f-4ae2-9384-ebd13ac5eed5-kube-api-access-gbdfl\") pod \"multus-jgjfm\" (UID: \"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\") " pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.063273 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.083222 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cs47t" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.090445 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.092118 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.096444 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jgjfm" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.102530 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mlb49" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.108710 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.113191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.113227 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.113240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.113262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.113278 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: W0219 08:21:26.113786 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod920aa359_8647_440a_842e_066313c39414.slice/crio-8966b2b953296a3fefe986335842dacabeb37ef80a045cb3083f644f5ba79640 WatchSource:0}: Error finding container 8966b2b953296a3fefe986335842dacabeb37ef80a045cb3083f644f5ba79640: Status 404 returned error can't find the container with id 8966b2b953296a3fefe986335842dacabeb37ef80a045cb3083f644f5ba79640 Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.116766 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: W0219 08:21:26.121230 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3eeec30_c76f_4ae2_9384_ebd13ac5eed5.slice/crio-a9c1a3bc83c73c836397ac15756ed299d7754dd8f4c6e4d696b6adaea381a7f1 WatchSource:0}: Error finding container a9c1a3bc83c73c836397ac15756ed299d7754dd8f4c6e4d696b6adaea381a7f1: Status 404 returned error can't find the container with id a9c1a3bc83c73c836397ac15756ed299d7754dd8f4c6e4d696b6adaea381a7f1 Feb 19 08:21:26 crc kubenswrapper[4780]: W0219 08:21:26.123471 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda293d184_7162_4977_8158_1b459d68981b.slice/crio-e723c5a288c848b2b2d2ad88aff5c29b7a3318a22e84dae0aefe9c22ea7ad23d WatchSource:0}: Error finding container e723c5a288c848b2b2d2ad88aff5c29b7a3318a22e84dae0aefe9c22ea7ad23d: Status 404 returned error can't find the container with id e723c5a288c848b2b2d2ad88aff5c29b7a3318a22e84dae0aefe9c22ea7ad23d Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.129875 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: W0219 08:21:26.136288 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e649075_d5ae_4d3a_b0af_b8f7f7784035.slice/crio-ead299c366ff1bcc4055f6197b6346e779c44b41dae9db9797f540255b993804 WatchSource:0}: Error finding container ead299c366ff1bcc4055f6197b6346e779c44b41dae9db9797f540255b993804: Status 404 returned error can't find the container with id ead299c366ff1bcc4055f6197b6346e779c44b41dae9db9797f540255b993804 Feb 19 
08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.139180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"8966b2b953296a3fefe986335842dacabeb37ef80a045cb3083f644f5ba79640"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.143766 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.144989 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cs47t" event={"ID":"f61eb1a9-489e-42f7-811c-36eb08e442d2","Type":"ContainerStarted","Data":"1ea5c3fc3d629cc2045c6134b9f508dedbfee8f64322ba198ebca743e0013662"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.147737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" 
event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerStarted","Data":"e723c5a288c848b2b2d2ad88aff5c29b7a3318a22e84dae0aefe9c22ea7ad23d"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.151263 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerStarted","Data":"a9c1a3bc83c73c836397ac15756ed299d7754dd8f4c6e4d696b6adaea381a7f1"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.166030 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.185021 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.225680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.225716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.225725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.225745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.225758 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.328671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.328712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.328721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.328736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.328745 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.430791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.430829 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.430842 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.430864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.430876 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.533235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.533293 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.533309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.533328 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.533340 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.619264 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.619469 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:21:34.619438655 +0000 UTC m=+37.363096104 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.637583 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.637641 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.637732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.637821 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.637837 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.720326 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.720394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.720435 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.720478 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720519 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720540 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720552 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720574 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720577 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720595 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720687 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720607 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-02-19 08:21:34.720592783 +0000 UTC m=+37.464250232 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720739 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720786 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:34.720730836 +0000 UTC m=+37.464388285 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720814 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:34.720805138 +0000 UTC m=+37.464462587 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.720829 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:34.720821889 +0000 UTC m=+37.464479338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.740551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.740616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.740634 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.740670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.740689 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.842799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.842842 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.842852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.842868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.842878 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.900351 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:49:57.142352433 +0000 UTC Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.937931 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.937962 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.937939 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.938106 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.938237 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:26 crc kubenswrapper[4780]: E0219 08:21:26.938314 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.944856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.944904 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.944914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.944934 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:26 crc kubenswrapper[4780]: I0219 08:21:26.944944 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:26Z","lastTransitionTime":"2026-02-19T08:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.048055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.048098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.048107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.048146 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.048159 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.151299 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.151349 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.151363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.151386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.151410 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.158335 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.158434 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.160349 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cs47t" event={"ID":"f61eb1a9-489e-42f7-811c-36eb08e442d2","Type":"ContainerStarted","Data":"fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.162557 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" exitCode=0 Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.162638 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.162693 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"ead299c366ff1bcc4055f6197b6346e779c44b41dae9db9797f540255b993804"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.164545 4780 generic.go:334] 
"Generic (PLEG): container finished" podID="a293d184-7162-4977-8158-1b459d68981b" containerID="cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d" exitCode=0 Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.164616 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerDied","Data":"cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.167265 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerStarted","Data":"f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.177842 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.196148 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.214945 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.230755 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.248874 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.254968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.255009 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.255021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.255041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.255051 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.265281 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.284631 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.316204 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.335886 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.350333 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.358566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.358605 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.358631 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.358648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.358662 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.362476 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.372948 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.384818 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.439097 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.456897 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.462475 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc 
kubenswrapper[4780]: I0219 08:21:27.462530 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.462541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.462561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.462572 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.469014 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.489151 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.511326 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.525998 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.543172 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.558492 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.566720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.566776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.566790 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc 
kubenswrapper[4780]: I0219 08:21:27.566809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.566824 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.575168 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.589437 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.604322 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.618172 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.632107 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.669946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.669995 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.670005 4780 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.670027 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.670049 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.698899 4780 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.772805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.772845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.772856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.772873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.772884 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.877111 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.877731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.877743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.877764 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.877776 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.901387 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:57:35.382594247 +0000 UTC Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.964993 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.980636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.980683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.980695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.980714 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.980727 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:27Z","lastTransitionTime":"2026-02-19T08:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:27 crc kubenswrapper[4780]: I0219 08:21:27.984778 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.004241 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.021449 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.054658 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.076365 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.084278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.084327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.084340 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.084362 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.084379 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.103603 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.122845 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.145858 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.161018 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.172808 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerStarted","Data":"41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.175896 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.182245 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.182344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.182359 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.182375 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.187544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.187607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.187628 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.187654 4780 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.187674 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.200362 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.227266 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.243937 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.262004 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.276538 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.297908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.297957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.297968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.297988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.297998 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.299443 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.344078 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.388845 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.402869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.402916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.402929 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.402953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.402968 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.403853 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339
bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.417227 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.434013 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.450809 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.464413 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.472554 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-59w6b"] Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.473011 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.474824 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.475241 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.475409 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.475550 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.477405 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.490031 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.504457 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.506803 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.506832 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.506842 4780 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.506859 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.506869 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.520146 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.535365 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.548980 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.561463 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.575610 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.590926 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.605294 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.609019 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.609058 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.609069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.609089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.609102 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.620148 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.641825 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.654466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea29fbcd-2cce-4482-87e2-2af59c52beed-host\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc 
kubenswrapper[4780]: I0219 08:21:28.654530 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ddl\" (UniqueName: \"kubernetes.io/projected/ea29fbcd-2cce-4482-87e2-2af59c52beed-kube-api-access-s5ddl\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.654563 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ea29fbcd-2cce-4482-87e2-2af59c52beed-serviceca\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.664686 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.676387 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.692454 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.703754 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.711852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.711891 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.711901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc 
kubenswrapper[4780]: I0219 08:21:28.711916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.711927 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.755265 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ea29fbcd-2cce-4482-87e2-2af59c52beed-serviceca\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.755377 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea29fbcd-2cce-4482-87e2-2af59c52beed-host\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.755461 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ddl\" (UniqueName: \"kubernetes.io/projected/ea29fbcd-2cce-4482-87e2-2af59c52beed-kube-api-access-s5ddl\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.756092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea29fbcd-2cce-4482-87e2-2af59c52beed-host\") pod 
\"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.756364 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ea29fbcd-2cce-4482-87e2-2af59c52beed-serviceca\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.779897 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ddl\" (UniqueName: \"kubernetes.io/projected/ea29fbcd-2cce-4482-87e2-2af59c52beed-kube-api-access-s5ddl\") pod \"node-ca-59w6b\" (UID: \"ea29fbcd-2cce-4482-87e2-2af59c52beed\") " pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.792995 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-59w6b" Feb 19 08:21:28 crc kubenswrapper[4780]: W0219 08:21:28.808373 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea29fbcd_2cce_4482_87e2_2af59c52beed.slice/crio-375721dfe1bddb7ed8df69dc2bb22abefe1e26eeb37993c35e7ca0b499671af1 WatchSource:0}: Error finding container 375721dfe1bddb7ed8df69dc2bb22abefe1e26eeb37993c35e7ca0b499671af1: Status 404 returned error can't find the container with id 375721dfe1bddb7ed8df69dc2bb22abefe1e26eeb37993c35e7ca0b499671af1 Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.814916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.814970 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.814990 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.815021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.815040 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.901782 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:24:42.228519562 +0000 UTC Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.918617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.918674 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.918693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.918720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.918743 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:28Z","lastTransitionTime":"2026-02-19T08:21:28Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.937369 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.937540 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:28 crc kubenswrapper[4780]: I0219 08:21:28.937587 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:28 crc kubenswrapper[4780]: E0219 08:21:28.937716 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:28 crc kubenswrapper[4780]: E0219 08:21:28.937807 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:28 crc kubenswrapper[4780]: E0219 08:21:28.937927 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.022697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.022782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.022828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.022852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.022891 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.126908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.127457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.127478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.127505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.127527 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.190980 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.191081 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.193373 4780 generic.go:334] "Generic (PLEG): container finished" podID="a293d184-7162-4977-8158-1b459d68981b" containerID="41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745" exitCode=0 Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.193531 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerDied","Data":"41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.196978 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-59w6b" event={"ID":"ea29fbcd-2cce-4482-87e2-2af59c52beed","Type":"ContainerStarted","Data":"63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.197053 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-59w6b" event={"ID":"ea29fbcd-2cce-4482-87e2-2af59c52beed","Type":"ContainerStarted","Data":"375721dfe1bddb7ed8df69dc2bb22abefe1e26eeb37993c35e7ca0b499671af1"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.209883 4780 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.226947 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.229693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.229752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.229768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.229802 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.229815 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.242095 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.259368 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.277038 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.299979 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.320028 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.331833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.331871 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.331898 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.331917 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.331931 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.337701 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.357728 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.371638 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.387799 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.403435 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.430678 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.435191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.435234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.435248 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.435267 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.435280 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.442249 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.454487 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.473082 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.496768 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.507710 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.521870 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.533957 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.539503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.539541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.539550 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc 
kubenswrapper[4780]: I0219 08:21:29.539577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.539586 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.546372 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.559661 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.575258 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.588052 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.600047 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.613493 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.625860 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.641050 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.642819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc 
kubenswrapper[4780]: I0219 08:21:29.642857 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.642867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.642885 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.642895 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.746203 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.746267 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.746285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.746332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.746350 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.849700 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.849776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.849796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.849823 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.849844 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.902607 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:29:12.846849483 +0000 UTC Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.951655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.951732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.951750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.951781 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:29 crc kubenswrapper[4780]: I0219 08:21:29.951802 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:29Z","lastTransitionTime":"2026-02-19T08:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.054987 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.055036 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.055050 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.055074 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.055087 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.157446 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.157734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.157802 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.157884 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.157987 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.205407 4780 generic.go:334] "Generic (PLEG): container finished" podID="a293d184-7162-4977-8158-1b459d68981b" containerID="4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad" exitCode=0 Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.205506 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerDied","Data":"4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.224728 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.252367 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 
08:21:30.261162 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.261239 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.261256 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.261278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.261334 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.288814 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.311069 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.333223 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.357509 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.363527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.363589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.363605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc 
kubenswrapper[4780]: I0219 08:21:30.363628 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.363646 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.374155 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.405066 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.421868 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.446766 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.467033 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.467086 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.467096 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.467111 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.467141 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.469032 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.491694 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.505632 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.520740 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.569343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc 
kubenswrapper[4780]: I0219 08:21:30.569376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.569385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.569403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.569415 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.643482 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.660921 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.672807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.672930 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc 
kubenswrapper[4780]: I0219 08:21:30.672957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.672991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.673016 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.682053 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.698146 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.717472 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.735410 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.751252 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.765570 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 
08:21:30.775706 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.775782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.775804 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.775847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.775868 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.796754 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.814746 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.832753 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.849910 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: 
unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.872895 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.879067 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.879140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.879156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.879180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.879198 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.888308 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.903693 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:36:03.012989816 +0000 UTC Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.912158 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.937794 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.937919 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:30 crc kubenswrapper[4780]: E0219 08:21:30.938032 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:30 crc kubenswrapper[4780]: E0219 08:21:30.938136 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.938073 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:30 crc kubenswrapper[4780]: E0219 08:21:30.938528 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.982162 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.982453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.982608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.982729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:30 crc kubenswrapper[4780]: I0219 08:21:30.982844 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:30Z","lastTransitionTime":"2026-02-19T08:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.085834 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.085984 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.086006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.086033 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.086054 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.189335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.189414 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.189433 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.189462 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.189482 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.213044 4780 generic.go:334] "Generic (PLEG): container finished" podID="a293d184-7162-4977-8158-1b459d68981b" containerID="452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e" exitCode=0 Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.213221 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerDied","Data":"452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.221681 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.236918 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.260376 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.282936 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.296565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.296630 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.296653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.296681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.296699 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.301289 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.320217 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2
ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.345160 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.357281 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a6794
2416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.368913 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.381418 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.396090 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\"
,\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.399700 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.399746 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.399760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.399781 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.399794 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.413328 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.425201 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.436544 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.449156 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:31Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.503363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.503410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.503424 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.503445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.503460 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.606796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.607294 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.607426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.607550 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.607665 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.711619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.711679 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.711704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.711736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.711761 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.814505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.814552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.814569 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.814591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.814609 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.904886 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:55:35.8097425 +0000 UTC Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.917830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.917892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.917907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.917928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:31 crc kubenswrapper[4780]: I0219 08:21:31.917946 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:31Z","lastTransitionTime":"2026-02-19T08:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.021965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.022039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.022066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.022097 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.022120 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.126021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.126081 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.126098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.126160 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.126183 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.228907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.228977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.229001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.229033 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.229057 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.231154 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerStarted","Data":"f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.258591 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.278984 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be342
14246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.299939 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.325315 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.334446 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.334574 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.334603 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.334690 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.334756 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.349676 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.371346 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.392910 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.414396 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.439000 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc 
kubenswrapper[4780]: I0219 08:21:32.439411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.439589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.439792 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.439934 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.451441 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.467680 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.485905 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.506382 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.524316 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.543984 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.544075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.544100 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.544153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.544172 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.553256 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrr
cz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:32Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.647745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.647831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.647855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.647885 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.647908 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.751194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.751247 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.751259 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.751276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.751288 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.854901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.854963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.854981 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.855008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.855026 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.905168 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:31:17.135882772 +0000 UTC Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.937641 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.937648 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:32 crc kubenswrapper[4780]: E0219 08:21:32.937881 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:32 crc kubenswrapper[4780]: E0219 08:21:32.938035 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.938459 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:32 crc kubenswrapper[4780]: E0219 08:21:32.938831 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.957946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.958010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.958022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.958045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:32 crc kubenswrapper[4780]: I0219 08:21:32.958058 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:32Z","lastTransitionTime":"2026-02-19T08:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.065826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.065865 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.065877 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.065897 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.065908 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.169551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.169614 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.169636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.169666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.169689 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.241450 4780 generic.go:334] "Generic (PLEG): container finished" podID="a293d184-7162-4977-8158-1b459d68981b" containerID="f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183" exitCode=0 Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.241520 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerDied","Data":"f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.250433 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.250800 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.274844 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\"
,\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.281165 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.281225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.281243 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.281276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.281299 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.290348 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.299230 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.327514 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.353187 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.371909 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.387789 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.387872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.387891 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.387920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.387939 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.392424 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.412052 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.427200 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.442974 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.459616 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.484791 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.492667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.492709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.492719 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.492736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.492748 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.497723 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.509836 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.523021 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.537717 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.554477 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.567942 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.580187 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.596213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.596267 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.596283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.596308 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.596324 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.596770 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.619705 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.632506 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942
416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.647855 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.662140 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.676689 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\"
,\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.696327 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.698677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.698762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.698782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.698809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.698867 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.712686 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.729411 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.745347 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.801902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.801963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.801982 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.802006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.802024 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.905696 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:34:49.595278092 +0000 UTC Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.906288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.906346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.906366 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.906392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:33 crc kubenswrapper[4780]: I0219 08:21:33.906411 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:33Z","lastTransitionTime":"2026-02-19T08:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.009675 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.009748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.009770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.009801 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.009824 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.113964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.114023 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.114042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.114072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.114093 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.217791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.217868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.217894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.217933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.217959 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.263016 4780 generic.go:334] "Generic (PLEG): container finished" podID="a293d184-7162-4977-8158-1b459d68981b" containerID="0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4" exitCode=0 Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.263115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerDied","Data":"0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.263531 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.264365 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.293455 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.310533 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.312720 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.324844 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.324902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.324920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.324945 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.324967 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.342945 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.365700 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.387334 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.413447 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.430346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.430410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.430437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.430467 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.430484 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.435182 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.465347 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.483051 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.503398 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.521304 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.533468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.533537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.533551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc 
kubenswrapper[4780]: I0219 08:21:34.533569 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.533950 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.533995 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.556381 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.574413 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.591186 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.608665 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.626517 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.637009 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.637060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.637075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.637097 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.637114 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.643120 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.656419 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.671385 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.687647 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.706300 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.720644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.720875 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:21:50.720839413 +0000 UTC m=+53.464497012 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.720996 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.721064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.721168 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721345 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721379 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721402 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721405 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721440 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:50.721426647 +0000 UTC m=+53.465084266 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721429 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721570 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:50.721535369 +0000 UTC m=+53.465192868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.721634 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:50.721599951 +0000 UTC m=+53.465257530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.722203 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.736777 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.740148 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.740183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.740194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.740215 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.740226 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.754032 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.766409 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.783444 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.801203 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.821902 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.822166 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.822214 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.822229 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.822288 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:50.822268427 +0000 UTC m=+53.565926056 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.843738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.843777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.843789 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.843810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.843824 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.906721 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:48:42.968899736 +0000 UTC Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.937314 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.937501 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.938042 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.938184 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.938266 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.938346 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.947918 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.947979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.948003 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.948942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.948988 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.964736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.965291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.965333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.965362 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.965399 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:34 crc kubenswrapper[4780]: E0219 08:21:34.992154 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.998608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.998683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.998704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.998741 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:34 crc kubenswrapper[4780]: I0219 08:21:34.998762 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:34Z","lastTransitionTime":"2026-02-19T08:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: E0219 08:21:35.018710 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.023809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.023867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.023887 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.023918 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.023971 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: E0219 08:21:35.049905 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.055649 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.055697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.055715 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.055740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.055760 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: E0219 08:21:35.075475 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.080470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.080535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.080562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.080587 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.080604 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: E0219 08:21:35.100238 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: E0219 08:21:35.100404 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.107086 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.107165 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.107188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.107215 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.107234 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.211291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.211348 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.211369 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.211401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.211425 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.286566 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" event={"ID":"a293d184-7162-4977-8158-1b459d68981b","Type":"ContainerStarted","Data":"6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.286660 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.310647 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.315105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.315184 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.315204 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.315232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.315252 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.331019 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.347355 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.359842 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.376655 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.391884 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.406496 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.417716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc 
kubenswrapper[4780]: I0219 08:21:35.417768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.417782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.417806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.417822 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.418797 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.437629 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.482730 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.516523 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a6794241648
7606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.520731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.520773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.520786 4780 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.520803 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.520815 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.536601 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.553172 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.565861 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:35Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.623919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.623975 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.623988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc 
kubenswrapper[4780]: I0219 08:21:35.624010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.624020 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.727313 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.727368 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.727382 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.727403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.727417 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.831354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.831409 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.831433 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.831464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.831487 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.907447 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:35:36.748670805 +0000 UTC Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.934910 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.934967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.934980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.934999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:35 crc kubenswrapper[4780]: I0219 08:21:35.935010 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:35Z","lastTransitionTime":"2026-02-19T08:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.038193 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.038248 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.038262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.038282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.038297 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.140584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.141044 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.141054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.141075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.141088 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.243629 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.243673 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.243684 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.243705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.243717 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.291251 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/0.log" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.294716 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a" exitCode=1 Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.294836 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.295745 4780 scope.go:117] "RemoveContainer" containerID="a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.319760 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.343805 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.347420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.347494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.347521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.347557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.347582 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.366507 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.391412 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: 
unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.412661 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.429325 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.447994 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.449935 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.449965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc 
kubenswrapper[4780]: I0219 08:21:36.449974 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.449992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.450002 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.468053 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.494314 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:36Z\\\",\\\"message\\\":\\\"licy event handler 4\\\\nI0219 08:21:35.993631 6072 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:35.993638 6072 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 08:21:35.993644 6072 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:21:35.993998 6072 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:35.994026 6072 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:35.994082 6072 factory.go:656] Stopping watch factory\\\\nI0219 08:21:35.994092 6072 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994106 6072 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:35.994115 6072 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:35.994255 6072 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:35.994877 6072 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994904 6072 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c
38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.506540 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.520441 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.533308 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.545568 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.553013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.553064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.553081 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.553106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.553135 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.561348 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:36Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.656762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.656823 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.656845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.656876 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.656898 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.760532 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.760599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.760621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.760654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.760867 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.864104 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.864202 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.864222 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.864251 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.864272 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.908201 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:10:28.884808733 +0000 UTC Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.937447 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.937540 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.937540 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:36 crc kubenswrapper[4780]: E0219 08:21:36.937690 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:36 crc kubenswrapper[4780]: E0219 08:21:36.937794 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:36 crc kubenswrapper[4780]: E0219 08:21:36.937917 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.966853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.966890 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.966903 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.966923 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:36 crc kubenswrapper[4780]: I0219 08:21:36.966935 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:36Z","lastTransitionTime":"2026-02-19T08:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.069489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.069551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.069572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.069605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.069623 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.172489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.172547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.172563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.172587 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.172600 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.276291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.276358 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.276381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.276413 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.276434 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.302492 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/0.log" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.306232 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.306400 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.325275 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.343647 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.374186 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:36Z\\\",\\\"message\\\":\\\"licy event handler 4\\\\nI0219 08:21:35.993631 6072 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:35.993638 6072 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 08:21:35.993644 6072 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:21:35.993998 6072 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0219 08:21:35.994026 6072 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:35.994082 6072 factory.go:656] Stopping watch factory\\\\nI0219 08:21:35.994092 6072 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994106 6072 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:35.994115 6072 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:35.994255 6072 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:35.994877 6072 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994904 6072 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.379314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.379376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.379397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.379423 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.379442 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.386496 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.402831 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.434309 4780 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.463332 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.481519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.481563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.481576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.481595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.481608 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.482172 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.500069 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.564274 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.578317 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.584408 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.584470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.584491 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.584518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.584537 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.594761 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.609398 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.621551 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.688155 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.688197 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.688207 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.688225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.688236 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.791997 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.792045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.792062 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.792083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.792098 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.895217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.895288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.895312 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.895340 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.895358 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.908636 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:07:47.147987752 +0000 UTC Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.959808 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.986596 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:37Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.998791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.998835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.998854 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.998879 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:37 crc kubenswrapper[4780]: I0219 08:21:37.998896 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:37Z","lastTransitionTime":"2026-02-19T08:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.020688 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:36Z\\\",\\\"message\\\":\\\"licy event handler 4\\\\nI0219 08:21:35.993631 6072 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:35.993638 6072 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 08:21:35.993644 6072 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:21:35.993998 6072 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0219 08:21:35.994026 6072 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:35.994082 6072 factory.go:656] Stopping watch factory\\\\nI0219 08:21:35.994092 6072 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994106 6072 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:35.994115 6072 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:35.994255 6072 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:35.994877 6072 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994904 6072 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.023048 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx"] Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.023834 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.027506 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.028909 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.047605 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.055649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dec27bcf-beb5-4439-8572-997ef30fc0ec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.055814 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dec27bcf-beb5-4439-8572-997ef30fc0ec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.055942 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdzj\" (UniqueName: \"kubernetes.io/projected/dec27bcf-beb5-4439-8572-997ef30fc0ec-kube-api-access-rpdzj\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.056029 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dec27bcf-beb5-4439-8572-997ef30fc0ec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.070052 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd4
2862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.093520 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.101527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.101596 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.101616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.101644 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.101665 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.114080 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.142052 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.156587 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dec27bcf-beb5-4439-8572-997ef30fc0ec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.156682 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dec27bcf-beb5-4439-8572-997ef30fc0ec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.156718 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dec27bcf-beb5-4439-8572-997ef30fc0ec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.156753 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdzj\" (UniqueName: \"kubernetes.io/projected/dec27bcf-beb5-4439-8572-997ef30fc0ec-kube-api-access-rpdzj\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.158282 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dec27bcf-beb5-4439-8572-997ef30fc0ec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.158280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dec27bcf-beb5-4439-8572-997ef30fc0ec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.165529 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.167787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dec27bcf-beb5-4439-8572-997ef30fc0ec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.188035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdzj\" (UniqueName: \"kubernetes.io/projected/dec27bcf-beb5-4439-8572-997ef30fc0ec-kube-api-access-rpdzj\") pod \"ovnkube-control-plane-749d76644c-kljjx\" (UID: \"dec27bcf-beb5-4439-8572-997ef30fc0ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.190605 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.205492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc 
kubenswrapper[4780]: I0219 08:21:38.205636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.205717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.205826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.205913 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.211297 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.237011 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.260501 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.282206 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.308562 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.310053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.310281 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.310438 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.310566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.310680 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.316768 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/1.log" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.318648 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/0.log" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.326845 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395" exitCode=1 Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.327085 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.327383 4780 scope.go:117] "RemoveContainer" containerID="a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.328660 4780 scope.go:117] "RemoveContainer" containerID="c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395" Feb 19 08:21:38 crc kubenswrapper[4780]: E0219 08:21:38.328986 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:21:38 
crc kubenswrapper[4780]: I0219 08:21:38.337662 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath
\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc 
kubenswrapper[4780]: I0219 08:21:38.346062 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.360515 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\
"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: W0219 08:21:38.378875 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddec27bcf_beb5_4439_8572_997ef30fc0ec.slice/crio-4a15d89d32e1e8589524b59ea471562322432ea57b5321c819c69630598cbd00 WatchSource:0}: Error finding container 4a15d89d32e1e8589524b59ea471562322432ea57b5321c819c69630598cbd00: Status 404 returned error can't find the container with id 4a15d89d32e1e8589524b59ea471562322432ea57b5321c819c69630598cbd00 Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.388673 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.417023 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.417080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.417095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.417116 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.417151 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.428217 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:36Z\\\",\\\"message\\\":\\\"licy event handler 4\\\\nI0219 08:21:35.993631 6072 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:35.993638 6072 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 08:21:35.993644 6072 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:21:35.993998 6072 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0219 08:21:35.994026 6072 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:35.994082 6072 factory.go:656] Stopping watch factory\\\\nI0219 08:21:35.994092 6072 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994106 6072 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:35.994115 6072 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:35.994255 6072 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:35.994877 6072 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994904 6072 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.444052 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.464671 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.485716 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.509229 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.523545 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.524090 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.524193 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.524213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc 
kubenswrapper[4780]: I0219 08:21:38.524246 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.524263 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.545839 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.564555 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.586085 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.601996 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.647336 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.647407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.647428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.647456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.647481 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.652098 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.671810 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.702200 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.725772 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.751074 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.753899 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.753946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.753966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc 
kubenswrapper[4780]: I0219 08:21:38.753994 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.754016 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.768942 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.793443 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.824023 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:36Z\\\",\\\"message\\\":\\\"licy event handler 4\\\\nI0219 08:21:35.993631 6072 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:35.993638 6072 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 08:21:35.993644 6072 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:21:35.993998 6072 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0219 08:21:35.994026 6072 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:35.994082 6072 factory.go:656] Stopping watch factory\\\\nI0219 08:21:35.994092 6072 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994106 6072 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:35.994115 6072 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:35.994255 6072 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:35.994877 6072 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994904 6072 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netwo
rks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.843894 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.856483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.856566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.856596 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.856635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.856665 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.858891 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.871636 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 
08:21:38.889432 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199baff
f3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.906430 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.909338 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:21:58.341176551 +0000 UTC Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.921605 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.937887 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.937943 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.937957 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:38 crc kubenswrapper[4780]: E0219 08:21:38.938116 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:38 crc kubenswrapper[4780]: E0219 08:21:38.938369 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:38 crc kubenswrapper[4780]: E0219 08:21:38.938549 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.941364 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.959332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.959388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.959403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.959424 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.959438 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:38Z","lastTransitionTime":"2026-02-19T08:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:38 crc kubenswrapper[4780]: I0219 08:21:38.964583 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:38Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.062255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.062309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.062322 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.062345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.062361 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.166058 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.166102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.166114 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.166151 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.166164 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.269933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.269996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.270012 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.270037 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.270054 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.336001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" event={"ID":"dec27bcf-beb5-4439-8572-997ef30fc0ec","Type":"ContainerStarted","Data":"37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.336077 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" event={"ID":"dec27bcf-beb5-4439-8572-997ef30fc0ec","Type":"ContainerStarted","Data":"91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.336094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" event={"ID":"dec27bcf-beb5-4439-8572-997ef30fc0ec","Type":"ContainerStarted","Data":"4a15d89d32e1e8589524b59ea471562322432ea57b5321c819c69630598cbd00"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.339332 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/1.log" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.347036 4780 scope.go:117] "RemoveContainer" containerID="c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395" Feb 19 08:21:39 crc kubenswrapper[4780]: E0219 08:21:39.347384 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.371505 4780 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a078d3e0c1cea5506f63f543386c4c727e0d761fa74e589e1ec8bf43b476374a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:36Z\\\",\\\"message\\\":\\\"licy event handler 4\\\\nI0219 08:21:35.993631 6072 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:35.993638 6072 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 08:21:35.993644 6072 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:21:35.993998 6072 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0219 08:21:35.994026 6072 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:35.994082 6072 factory.go:656] Stopping watch factory\\\\nI0219 08:21:35.994092 6072 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994106 6072 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:35.994115 6072 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:35.994255 6072 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:35.994877 6072 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:35.994904 6072 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netwo
rks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.373510 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc 
kubenswrapper[4780]: I0219 08:21:39.373563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.373579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.373607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.373624 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.391019 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.408322 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.431674 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:2
0:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f8
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.450164 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.467853 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.476069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.476169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.476188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.476212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.476227 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.487704 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.500620 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.514862 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.527487 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.546694 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.560968 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.579083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.579139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.579152 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.579170 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.579184 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.579399 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.592562 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.604343 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.619388 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.637870 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.651373 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.673762 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.682431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.682480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.682492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.682509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.682522 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.687950 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.699428 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d
7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.711314 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.723204 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.734958 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.748619 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.760888 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.777080 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.784899 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.784957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.784970 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.784992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.785008 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.789574 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.805290 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: 
unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.818705 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.887721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.887815 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.887837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.887875 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.887900 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.902218 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jg765"] Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.903048 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:39 crc kubenswrapper[4780]: E0219 08:21:39.903189 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.909768 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:37:16.530991544 +0000 UTC Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.920590 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.938876 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be342
14246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.959439 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.978563 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhsh\" (UniqueName: \"kubernetes.io/projected/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-kube-api-access-tvhsh\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.978624 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.979784 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\"
,\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:39Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.990767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.990845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.990865 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.990893 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:39 crc kubenswrapper[4780]: I0219 08:21:39.990917 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:39Z","lastTransitionTime":"2026-02-19T08:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.005425 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.025937 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.044027 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.060856 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.079575 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.079686 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvhsh\" (UniqueName: \"kubernetes.io/projected/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-kube-api-access-tvhsh\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.079836 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.079931 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:40.579902456 +0000 UTC m=+43.323559955 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.088187 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.096705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.096782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.096813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.096847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.096873 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.105881 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.112042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvhsh\" (UniqueName: \"kubernetes.io/projected/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-kube-api-access-tvhsh\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.124581 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d
7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.141149 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc 
kubenswrapper[4780]: I0219 08:21:40.159077 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.178812 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.195186 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.200550 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.200598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.200613 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.200637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.200655 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.220260 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.304236 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.304285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.304303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.304327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.304344 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.406382 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.406427 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.406439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.406457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.406472 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.509262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.509296 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.509306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.509325 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.509335 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.584295 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.584547 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.584673 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:41.584643831 +0000 UTC m=+44.328301310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.612014 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.612072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.612090 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.612119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.612167 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.715553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.715652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.715672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.715704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.715723 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.819666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.819734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.819754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.819786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.819806 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.910206 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:26:39.771254709 +0000 UTC Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.922483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.922564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.922591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.922619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.922636 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:40Z","lastTransitionTime":"2026-02-19T08:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.937877 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.937901 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:40 crc kubenswrapper[4780]: I0219 08:21:40.937946 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.938073 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.938447 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:40 crc kubenswrapper[4780]: E0219 08:21:40.938528 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.026861 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.026901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.026912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.026930 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.026941 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.130114 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.130210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.130228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.130253 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.130272 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.234318 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.234393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.234413 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.234445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.234467 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.339292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.339346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.339362 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.339386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.339402 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.442729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.442825 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.442857 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.442884 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.442905 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.546927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.547018 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.547041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.547080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.547106 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.603862 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:41 crc kubenswrapper[4780]: E0219 08:21:41.604054 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:41 crc kubenswrapper[4780]: E0219 08:21:41.604208 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:43.604161977 +0000 UTC m=+46.347819466 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.650520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.650682 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.650705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.650730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.650749 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.754637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.754694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.754716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.754746 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.754799 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.857819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.857881 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.857900 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.857926 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.857945 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.911400 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:31:39.527109262 +0000 UTC Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.937893 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:41 crc kubenswrapper[4780]: E0219 08:21:41.938215 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.960817 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.960878 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.960896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.960918 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:41 crc kubenswrapper[4780]: I0219 08:21:41.960939 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:41Z","lastTransitionTime":"2026-02-19T08:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.064226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.064310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.064335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.064369 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.064393 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.168296 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.168364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.168384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.168410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.168430 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.271733 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.271810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.271830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.271866 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.271892 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.281972 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.283592 4780 scope.go:117] "RemoveContainer" containerID="c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395" Feb 19 08:21:42 crc kubenswrapper[4780]: E0219 08:21:42.284050 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.374916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.374987 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.375006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.375035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.375055 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.478726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.478807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.478826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.478856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.478878 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.582046 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.582111 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.582167 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.582200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.582222 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.685519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.685613 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.685639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.685672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.685694 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.788651 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.788742 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.788768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.788797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.788819 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.891858 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.892021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.892035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.892052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.892065 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.911926 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:29:30.533864762 +0000 UTC Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.937645 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:42 crc kubenswrapper[4780]: E0219 08:21:42.937843 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.938315 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:42 crc kubenswrapper[4780]: E0219 08:21:42.938764 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.938474 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:42 crc kubenswrapper[4780]: E0219 08:21:42.939344 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.995184 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.995236 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.995253 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.995280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:42 crc kubenswrapper[4780]: I0219 08:21:42.995299 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:42Z","lastTransitionTime":"2026-02-19T08:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.099574 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.099657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.099678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.099707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.099726 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.203998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.204085 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.204103 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.204161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.204182 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.307188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.307262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.307282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.307309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.307329 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.410567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.410642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.410662 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.410687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.410704 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.521544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.521619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.521637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.521664 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.521684 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.625572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.625656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.625715 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.625752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.625780 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.628607 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:43 crc kubenswrapper[4780]: E0219 08:21:43.628842 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:43 crc kubenswrapper[4780]: E0219 08:21:43.628973 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:47.628931112 +0000 UTC m=+50.372588621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.729889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.729958 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.729976 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.730002 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.730020 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.833440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.833531 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.833552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.833579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.833600 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.912424 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:49:42.105679219 +0000 UTC Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.936762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.936808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.936818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.936835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.936846 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:43Z","lastTransitionTime":"2026-02-19T08:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:43 crc kubenswrapper[4780]: I0219 08:21:43.937255 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:43 crc kubenswrapper[4780]: E0219 08:21:43.937506 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.040969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.041041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.041060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.041087 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.041111 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.145003 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.145074 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.145094 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.145154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.145175 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.248846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.248948 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.248968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.249021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.249038 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.352845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.352942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.352971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.352998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.353022 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.456160 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.456234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.456252 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.456285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.456307 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.560352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.560425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.560446 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.560517 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.560545 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.663512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.663581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.663608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.663640 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.663663 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.767819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.767890 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.767909 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.767942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.767966 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.870421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.870470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.870483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.870501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.870515 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.913240 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:16:40.258575769 +0000 UTC Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.937773 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.937773 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.937938 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:44 crc kubenswrapper[4780]: E0219 08:21:44.938107 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:44 crc kubenswrapper[4780]: E0219 08:21:44.938302 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:44 crc kubenswrapper[4780]: E0219 08:21:44.938480 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.973634 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.973730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.973757 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.973786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:44 crc kubenswrapper[4780]: I0219 08:21:44.973809 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:44Z","lastTransitionTime":"2026-02-19T08:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.076545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.076606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.076626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.076653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.076674 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.179992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.180072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.180095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.180159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.180178 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.283953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.284011 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.284031 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.284076 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.284096 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.387079 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.387178 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.387198 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.387284 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.387308 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.462046 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.462109 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.462189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.462218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.462236 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.485920 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:45Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.493014 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.493075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.493094 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.493177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.493200 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.509559 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:45Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.514343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.514384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.514400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.514422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.514438 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.529278 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:45Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.533813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.533859 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.533875 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.533898 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.533915 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.550058 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:45Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.554467 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.554506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.554519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.554539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.554553 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.569156 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:45Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.569303 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.571317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.571360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.571377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.571400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.571420 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.674684 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.674727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.674744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.674768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.674789 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.777435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.777720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.777848 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.777961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.778033 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.880752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.880827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.880846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.880873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.880893 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.914417 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:44:52.091424915 +0000 UTC Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.938109 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:45 crc kubenswrapper[4780]: E0219 08:21:45.938433 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.983679 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.983745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.983765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.983789 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:45 crc kubenswrapper[4780]: I0219 08:21:45.983809 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:45Z","lastTransitionTime":"2026-02-19T08:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.087800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.087911 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.087931 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.087996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.088017 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.191300 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.191372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.191392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.191428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.191449 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.295461 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.295592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.295628 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.295717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.295800 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.399215 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.399280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.399300 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.399330 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.399351 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.502651 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.502720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.502733 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.502759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.502773 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.606491 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.606573 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.606599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.606643 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.606675 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.710051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.710092 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.710105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.710142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.710156 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.812979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.813049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.813069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.813096 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.813116 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.914922 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:22:46.231119851 +0000 UTC Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.917195 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.917291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.917310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.917338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.917358 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:46Z","lastTransitionTime":"2026-02-19T08:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.937724 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.937773 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:46 crc kubenswrapper[4780]: E0219 08:21:46.937895 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:46 crc kubenswrapper[4780]: I0219 08:21:46.938303 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:46 crc kubenswrapper[4780]: E0219 08:21:46.938419 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:46 crc kubenswrapper[4780]: E0219 08:21:46.938556 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.020445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.020498 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.020518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.020546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.020567 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.124311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.124393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.124413 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.124444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.124464 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.228631 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.228710 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.228730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.228766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.228789 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.332229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.332319 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.332343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.332538 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.332567 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.436724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.436786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.436806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.436833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.436852 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.541328 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.541381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.541393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.541417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.541437 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.645418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.645472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.645490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.645518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.645538 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.674581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:47 crc kubenswrapper[4780]: E0219 08:21:47.674836 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:47 crc kubenswrapper[4780]: E0219 08:21:47.674942 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:21:55.67490808 +0000 UTC m=+58.418565559 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.748402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.748920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.749119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.749317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.749521 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.852464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.852511 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.852525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.852544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.852558 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.915567 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:29:52.472232304 +0000 UTC Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.937969 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:47 crc kubenswrapper[4780]: E0219 08:21:47.938448 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.958616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.958677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.958691 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.958709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.958722 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:47Z","lastTransitionTime":"2026-02-19T08:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.959811 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:47Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:47 crc kubenswrapper[4780]: I0219 08:21:47.983280 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\"
,\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:47Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.005243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.026996 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.048595 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.062332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.062386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.062399 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.062419 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.062434 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.070904 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.096451 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.119266 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.138011 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.154270 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.166726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.166790 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.166826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.166853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.167941 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.177027 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.207867 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.223511 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.243375 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.260859 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc 
kubenswrapper[4780]: I0219 08:21:48.274817 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.274871 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.274887 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.274919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.274938 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.289550 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339
bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:48Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.378961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.379332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.379423 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.379507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.379582 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.483751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.483828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.483852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.483885 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.483907 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.588019 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.588066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.588083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.588107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.588171 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.690987 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.691282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.691434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.691584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.691703 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.796023 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.796096 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.796119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.796184 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.796206 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.900110 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.900231 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.900258 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.900292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.900317 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:48Z","lastTransitionTime":"2026-02-19T08:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.915821 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:20:05.886434826 +0000 UTC Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.937272 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.937363 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:48 crc kubenswrapper[4780]: I0219 08:21:48.937272 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:48 crc kubenswrapper[4780]: E0219 08:21:48.937557 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:48 crc kubenswrapper[4780]: E0219 08:21:48.937941 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:48 crc kubenswrapper[4780]: E0219 08:21:48.938025 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.004262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.004339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.004357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.004385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.004412 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.107734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.107810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.107835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.107870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.107895 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.211711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.211835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.211863 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.211895 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.211919 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.316447 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.316524 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.316547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.316578 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.316598 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.420663 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.420740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.420762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.420797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.420822 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.524371 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.524432 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.524452 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.524481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.524501 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.627571 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.627638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.627656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.627683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.627700 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.731889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.731963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.731983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.732016 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.732037 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.836218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.836285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.836303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.836335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.836355 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.917066 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:26:40.651510882 +0000 UTC Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.937751 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:49 crc kubenswrapper[4780]: E0219 08:21:49.937973 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.940461 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.940533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.940553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.940579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:49 crc kubenswrapper[4780]: I0219 08:21:49.940597 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:49Z","lastTransitionTime":"2026-02-19T08:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.044209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.044279 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.044297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.044324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.044343 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.147986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.148058 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.148076 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.148105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.148157 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.251698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.251760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.251779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.251807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.251835 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.356025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.356084 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.356112 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.356186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.356212 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.459903 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.459977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.460001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.460038 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.460064 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.570975 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.571082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.571105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.571160 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.571193 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.674311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.674371 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.674390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.674416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.674436 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.777495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.777940 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.778458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.778682 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.779193 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.810335 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.810567 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:22:22.810521717 +0000 UTC m=+85.554179196 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.810925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.811403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.811861 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.811293 4780 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.814242 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:22:22.814209574 +0000 UTC m=+85.557867053 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.811790 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.812049 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.814950 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.815215 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:50 
crc kubenswrapper[4780]: E0219 08:21:50.815010 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:22:22.814989993 +0000 UTC m=+85.558647482 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.815392 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:22:22.815348061 +0000 UTC m=+85.559005650 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.883074 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.883284 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.883326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.883359 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.883380 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.912775 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.913049 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.913082 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.913102 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.913213 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:22:22.913187781 +0000 UTC m=+85.656845270 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.918658 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:32:27.89684082 +0000 UTC Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.937443 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.937481 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.937661 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.937863 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.938193 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:50 crc kubenswrapper[4780]: E0219 08:21:50.940411 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.987329 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.987384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.987402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.987430 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:50 crc kubenswrapper[4780]: I0219 08:21:50.987448 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:50Z","lastTransitionTime":"2026-02-19T08:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.092593 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.092654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.092670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.092696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.092710 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.195565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.195657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.195682 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.195711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.195736 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.299788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.299844 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.299862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.299890 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.299911 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.402691 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.402739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.402751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.402770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.402785 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.505699 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.505934 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.506024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.506088 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.506166 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.609694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.609760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.609777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.609805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.609823 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.712763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.712841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.712865 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.712897 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.712920 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.816553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.816631 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.816657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.816689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.816712 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.919812 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:14:56.671389884 +0000 UTC Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.920769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.920847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.920867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.920896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.920916 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:51Z","lastTransitionTime":"2026-02-19T08:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:51 crc kubenswrapper[4780]: I0219 08:21:51.937822 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:51 crc kubenswrapper[4780]: E0219 08:21:51.938097 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.025869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.025949 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.025963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.025988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.026004 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.130335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.130414 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.130439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.130473 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.130498 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.233016 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.233082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.233101 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.233156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.233178 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.336538 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.336609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.336627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.336655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.336673 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.440559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.440637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.440659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.440697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.440720 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.544800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.544867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.544892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.544920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.544940 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.653170 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.653239 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.653262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.653304 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.653324 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.756780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.756860 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.756880 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.756914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.756936 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.860847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.860896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.860913 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.860940 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.860955 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.920368 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:29:29.124204391 +0000 UTC Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.938275 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.938330 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:52 crc kubenswrapper[4780]: E0219 08:21:52.938478 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:52 crc kubenswrapper[4780]: E0219 08:21:52.938588 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.939197 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:52 crc kubenswrapper[4780]: E0219 08:21:52.939636 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.964662 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.964717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.964738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.964765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:52 crc kubenswrapper[4780]: I0219 08:21:52.964791 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:52Z","lastTransitionTime":"2026-02-19T08:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.068650 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.068709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.068728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.068758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.068779 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.172748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.172831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.172855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.172891 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.172915 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.276542 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.276615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.276632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.276658 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.276677 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.380095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.380409 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.380515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.380598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.380659 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.473918 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.484282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.484351 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.484369 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.484396 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.484414 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.490092 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.495855 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.517400 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.543873 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.579115 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.587933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.587993 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.588001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.588018 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.588029 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.597532 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.617591 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d
7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.636759 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc 
kubenswrapper[4780]: I0219 08:21:53.662587 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.685090 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.691028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.691428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.691510 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.691535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.691553 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.704261 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.724995 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.746831 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.767912 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.783990 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.795568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.795670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.795698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.795734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.795755 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.806916 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.827571 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.899376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.899438 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.899454 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.899483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.899501 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:53Z","lastTransitionTime":"2026-02-19T08:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.921719 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:18:38.091813233 +0000 UTC Feb 19 08:21:53 crc kubenswrapper[4780]: I0219 08:21:53.938204 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:53 crc kubenswrapper[4780]: E0219 08:21:53.938393 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.002205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.002261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.002272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.002292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.002305 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.105868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.105998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.106017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.106055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.106075 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.209874 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.209953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.209977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.210013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.210035 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.314269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.314328 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.314352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.314384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.314405 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.417569 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.417624 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.417635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.417654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.417667 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.520320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.520432 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.520452 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.520480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.520500 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.623756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.623813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.623827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.623849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.623863 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.727421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.727474 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.727488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.727507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.727521 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.831934 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.832001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.832021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.832056 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.832076 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.922853 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:07:44.848352765 +0000 UTC Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.935615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.935683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.935700 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.935731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.935752 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:54Z","lastTransitionTime":"2026-02-19T08:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.937974 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.938616 4780 scope.go:117] "RemoveContainer" containerID="c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.938904 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:54 crc kubenswrapper[4780]: E0219 08:21:54.938972 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:54 crc kubenswrapper[4780]: I0219 08:21:54.939118 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:54 crc kubenswrapper[4780]: E0219 08:21:54.939180 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:54 crc kubenswrapper[4780]: E0219 08:21:54.939282 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.039327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.039805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.039877 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.039954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.040022 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.143105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.143397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.143493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.143561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.143641 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.247479 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.247542 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.247559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.247585 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.247602 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.351487 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.351548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.351568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.351592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.351611 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.413753 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/1.log" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.417640 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.419321 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.450243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.454365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.454445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.454462 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.454482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.454842 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.482984 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.502627 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.560095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.560153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.560164 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.560188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.560199 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.571142 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.590899 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.605288 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.621042 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.635015 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.650876 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.662145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.662191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.662200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.662218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.662228 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.664486 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.675500 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.678153 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.678409 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.678527 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:22:11.678495899 +0000 UTC m=+74.422153388 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.693019 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.728002 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) 
from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.740284 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.754263 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.754384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.754403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.754433 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.754452 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.758610 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.767031 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.771052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.771163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.771196 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.771229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.771252 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.772354 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc 
kubenswrapper[4780]: I0219 08:21:55.783993 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.785579 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.790792 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.790837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.790849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.790867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.790879 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.803245 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.807590 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.807639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.807659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.807689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.807708 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.826292 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.831226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.831275 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.831297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.831324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.831345 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.844286 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:55Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.844449 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.847834 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.847888 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.847901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.847919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.847931 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.923704 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:14:26.331837601 +0000 UTC Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.938219 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:55 crc kubenswrapper[4780]: E0219 08:21:55.938394 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.950741 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.950795 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.950809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.950827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:55 crc kubenswrapper[4780]: I0219 08:21:55.950840 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:55Z","lastTransitionTime":"2026-02-19T08:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.054875 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.054922 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.054932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.054950 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.054960 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.158367 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.158459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.158478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.158507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.158527 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.261251 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.261309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.261328 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.261354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.261373 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.366287 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.366390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.366408 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.366437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.366460 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.423750 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/2.log" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.424477 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/1.log" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.427590 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453" exitCode=1 Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.427631 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.427669 4780 scope.go:117] "RemoveContainer" containerID="c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.428418 4780 scope.go:117] "RemoveContainer" containerID="340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453" Feb 19 08:21:56 crc kubenswrapper[4780]: E0219 08:21:56.428571 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.450491 4780 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410
f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.469617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.469656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.469669 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.469707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.469721 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.474316 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.492994 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.514372 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.531530 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.546737 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.565067 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.572554 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc 
kubenswrapper[4780]: I0219 08:21:56.572596 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.572608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.572628 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.572641 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.582949 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.601577 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.613560 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.630467 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.658575 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4af49930456167ebad19a3e137a3397cb59fb2a5bd5ddd879fde7c0b63f4395\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:37Z\\\",\\\"message\\\":\\\"1\\\\nI0219 08:21:37.507059 6243 reflector.go:311] Stopping reflector *v1.EgressIP (0s) 
from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:37.507446 6243 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.507672 6243 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.508030 6243 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 08:21:37.509443 6243 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 08:21:37.509467 6243 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 08:21:37.509484 6243 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:37.509490 6243 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:37.509517 6243 factory.go:656] Stopping watch factory\\\\nI0219 08:21:37.509540 6243 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:21:37.509542 6243 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:21:37.509550 6243 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:37.509559 6243 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.671261 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.675885 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.675916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.675926 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.675944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.675955 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.688416 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.700954 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc 
kubenswrapper[4780]: I0219 08:21:56.717383 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.729480 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:56Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.778613 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.778689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.778711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc 
kubenswrapper[4780]: I0219 08:21:56.778739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.778762 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.881985 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.882039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.882053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.882077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.882090 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.924823 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:55:12.760863447 +0000 UTC Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.937150 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.937287 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:56 crc kubenswrapper[4780]: E0219 08:21:56.937352 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.937148 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:56 crc kubenswrapper[4780]: E0219 08:21:56.937546 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:56 crc kubenswrapper[4780]: E0219 08:21:56.937683 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.985346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.985413 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.985434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.985458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:56 crc kubenswrapper[4780]: I0219 08:21:56.985477 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:56Z","lastTransitionTime":"2026-02-19T08:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.089247 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.089347 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.089376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.089415 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.089443 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.193481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.193563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.193587 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.193619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.193640 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.297630 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.297689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.297705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.297726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.297745 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.401493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.401561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.401580 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.401607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.401626 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.435675 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/2.log" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.442509 4780 scope.go:117] "RemoveContainer" containerID="340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453" Feb 19 08:21:57 crc kubenswrapper[4780]: E0219 08:21:57.442844 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.464324 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.488095 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.506117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc 
kubenswrapper[4780]: I0219 08:21:57.506212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.506235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.506269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.506290 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.508554 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.530219 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a
37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.555990 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.592007 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.609842 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.609795 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.609907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.610220 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.610268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.610291 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.628761 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.644895 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc 
kubenswrapper[4780]: I0219 08:21:57.664787 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.684783 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.706783 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.725579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.725668 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.725687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc 
kubenswrapper[4780]: I0219 08:21:57.725720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.725740 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.735100 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.756339 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.777454 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.799543 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.822238 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.828581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.828667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.828693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.828727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.828748 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.925747 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:19:32.836857952 +0000 UTC Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.932681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.932751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.932769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.932800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.932825 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:57Z","lastTransitionTime":"2026-02-19T08:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.938192 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:57 crc kubenswrapper[4780]: E0219 08:21:57.938388 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.968419 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:57 crc kubenswrapper[4780]: I0219 08:21:57.993026 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:57Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.013814 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.034757 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.035936 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.035985 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.036005 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.036066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.036089 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.053733 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.073609 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de
2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.097389 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.119677 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.139817 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc 
kubenswrapper[4780]: I0219 08:21:58.139862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.139883 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.139909 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.139928 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.140951 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d
7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.158526 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc 
kubenswrapper[4780]: I0219 08:21:58.179967 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.199086 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.215596 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.242949 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.244356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.244612 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.244758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.244902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.245037 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.277471 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.298833 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.320747 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:21:58Z is after 2025-08-24T17:21:41Z" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.348871 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.348950 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.348979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.349013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.349040 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.452904 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.452966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.452983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.453008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.453029 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.556120 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.556235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.556255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.556286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.556306 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.659328 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.659416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.659434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.659469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.659487 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.762763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.762826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.762840 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.762861 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.762876 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.866617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.866716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.866730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.866752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.866765 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.926362 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:11:45.313986253 +0000 UTC Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.937730 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.937791 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:21:58 crc kubenswrapper[4780]: E0219 08:21:58.937893 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:21:58 crc kubenswrapper[4780]: E0219 08:21:58.938215 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.938606 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:21:58 crc kubenswrapper[4780]: E0219 08:21:58.938742 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.970189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.970268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.970292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.970321 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:58 crc kubenswrapper[4780]: I0219 08:21:58.970340 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:58Z","lastTransitionTime":"2026-02-19T08:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.073872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.073939 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.073957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.073989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.074008 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.184660 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.184728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.184747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.184774 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.184794 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.289206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.289301 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.289320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.289349 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.289370 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.393434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.393516 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.393538 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.393571 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.393593 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.496959 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.497039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.497057 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.497089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.497108 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.600572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.600629 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.600643 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.600663 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.600676 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.704365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.704561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.704581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.704699 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.704745 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.812581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.812628 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.812640 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.812657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.812666 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.917348 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.917399 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.917411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.917433 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.917448 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:21:59Z","lastTransitionTime":"2026-02-19T08:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.926988 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:29:38.356405578 +0000 UTC Feb 19 08:21:59 crc kubenswrapper[4780]: I0219 08:21:59.937664 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:21:59 crc kubenswrapper[4780]: E0219 08:21:59.937925 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.020688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.020760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.020775 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.020796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.020812 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.124335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.124393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.124411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.124440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.124457 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.227181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.227261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.227284 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.227318 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.227344 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.330501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.330572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.330591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.330614 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.330628 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.434277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.434391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.434411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.434442 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.434461 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.537421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.537491 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.537511 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.537541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.537560 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.644119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.644211 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.644228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.644251 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.644271 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.747457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.747522 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.747538 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.747562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.747579 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.850882 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.850952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.850970 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.851004 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.851025 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.927970 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:29:21.036592305 +0000 UTC Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.937460 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.937597 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:00 crc kubenswrapper[4780]: E0219 08:22:00.937660 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.937597 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:00 crc kubenswrapper[4780]: E0219 08:22:00.937812 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:00 crc kubenswrapper[4780]: E0219 08:22:00.937979 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.953947 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.954003 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.954022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.954048 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:00 crc kubenswrapper[4780]: I0219 08:22:00.954068 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:00Z","lastTransitionTime":"2026-02-19T08:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.057695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.057761 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.057781 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.057808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.057826 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.161181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.161244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.161262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.161292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.161311 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.264810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.264880 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.264901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.264933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.264953 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.368980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.369051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.369070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.369097 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.369152 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.472464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.472539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.472559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.472591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.472610 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.575589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.575657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.575677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.575712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.575733 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.678445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.678507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.678521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.678542 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.678556 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.781552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.781632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.781656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.781689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.781714 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.884819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.884873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.884891 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.884922 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.884943 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.929216 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:39:04.814599778 +0000 UTC Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.937743 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:01 crc kubenswrapper[4780]: E0219 08:22:01.937989 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.987764 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.987806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.987816 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.987835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:01 crc kubenswrapper[4780]: I0219 08:22:01.987850 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:01Z","lastTransitionTime":"2026-02-19T08:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.090273 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.090315 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.090331 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.090401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.090417 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.194107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.194165 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.194177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.194194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.194205 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.297828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.297925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.297951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.297988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.298020 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.401063 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.401376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.401387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.401405 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.401418 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.504475 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.504568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.504592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.504622 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.504642 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.607287 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.607359 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.607372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.607388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.607398 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.709950 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.710030 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.710044 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.710067 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.710089 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.812795 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.812892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.812910 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.812932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.812947 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.916032 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.916079 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.916100 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.916155 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.916176 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:02Z","lastTransitionTime":"2026-02-19T08:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.929800 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:48:27.095196053 +0000 UTC Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.937402 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.937459 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:02 crc kubenswrapper[4780]: E0219 08:22:02.937564 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:02 crc kubenswrapper[4780]: E0219 08:22:02.937700 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:02 crc kubenswrapper[4780]: I0219 08:22:02.937849 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:02 crc kubenswrapper[4780]: E0219 08:22:02.937957 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.018994 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.019027 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.019038 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.019055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.019067 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.121573 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.121605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.121615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.121626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.121635 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.224401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.224495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.224519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.224549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.224569 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.327693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.327740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.327759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.327779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.327792 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.431210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.431268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.431281 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.431304 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.431316 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.534259 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.534307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.534321 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.534343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.534359 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.637387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.637464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.637483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.637509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.637533 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.740637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.740678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.740688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.740704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.740714 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.844143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.844191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.844202 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.844218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.844231 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.930079 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:17:15.674064678 +0000 UTC Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.937639 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:03 crc kubenswrapper[4780]: E0219 08:22:03.937948 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.945968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.946046 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.946067 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.946097 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:03 crc kubenswrapper[4780]: I0219 08:22:03.946117 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:03Z","lastTransitionTime":"2026-02-19T08:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.053966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.054049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.054071 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.054099 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.054119 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.156755 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.156826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.156846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.156914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.156933 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.259889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.259938 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.259951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.259971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.259986 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.362708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.362791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.362810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.362837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.362859 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.465363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.465697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.465765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.465847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.465913 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.569238 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.569321 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.569352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.569382 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.569401 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.672883 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.672936 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.672951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.672972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.672987 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.776339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.776435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.776459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.776494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.776519 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.879766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.880278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.880443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.880593 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.880738 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.930695 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:03:04.421841693 +0000 UTC Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.938280 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.938348 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:04 crc kubenswrapper[4780]: E0219 08:22:04.938490 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.938603 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:04 crc kubenswrapper[4780]: E0219 08:22:04.938672 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:04 crc kubenswrapper[4780]: E0219 08:22:04.938926 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.983496 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.983553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.983565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.983584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:04 crc kubenswrapper[4780]: I0219 08:22:04.983601 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:04Z","lastTransitionTime":"2026-02-19T08:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.086884 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.086968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.086989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.087025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.087046 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.189500 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.189555 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.189568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.189586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.189599 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.292549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.292626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.292650 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.292680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.292702 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.395035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.395494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.395638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.395766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.395896 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.499244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.499307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.499319 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.499345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.499361 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.602630 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.602697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.602748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.602774 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.602792 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.705875 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.705946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.705964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.705991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.706011 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.808756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.808828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.808851 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.808883 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.808905 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.911900 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.912277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.912478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.912626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.912773 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:05Z","lastTransitionTime":"2026-02-19T08:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.931257 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:07:38.278495056 +0000 UTC Feb 19 08:22:05 crc kubenswrapper[4780]: I0219 08:22:05.937839 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:05 crc kubenswrapper[4780]: E0219 08:22:05.938149 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.017035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.017109 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.017173 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.017210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.017237 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.120204 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.120576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.120659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.120751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.120840 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.199678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.199970 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.200040 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.200106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.200240 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.227071 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.233707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.233849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.233915 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.233999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.234081 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.249030 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.253657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.253703 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.253713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.253733 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.253747 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.265672 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.270700 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.270951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.271119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.271451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.271612 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.293777 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.298272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.298327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.298343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.298370 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.298385 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.312079 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.312287 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.314855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.314900 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.314915 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.314942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.314958 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.418447 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.418504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.418520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.418541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.418554 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.521488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.521561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.521586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.521619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.521639 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.624523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.624581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.624600 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.624629 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.624650 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.728391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.728480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.728502 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.728532 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.728555 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.831777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.831852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.831879 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.831914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.831937 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.931884 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:42:02.459598657 +0000 UTC Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.934726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.934768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.934780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.934796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.934807 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:06Z","lastTransitionTime":"2026-02-19T08:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.938068 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.938094 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:06 crc kubenswrapper[4780]: I0219 08:22:06.938166 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.938241 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.938352 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:06 crc kubenswrapper[4780]: E0219 08:22:06.938413 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.038192 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.038240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.038252 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.038268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.038281 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.142168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.142261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.142301 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.142327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.142341 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.245118 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.245161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.245177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.245197 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.245212 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.348357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.348422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.348436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.348456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.348471 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.451410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.451472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.451487 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.451509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.451522 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.555201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.555250 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.555261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.555280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.555289 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.658853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.658924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.658942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.658986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.659017 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.762903 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.762976 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.763001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.763033 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.763055 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.866002 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.866055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.866070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.866090 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.866107 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.932069 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:06:32.756300899 +0000 UTC Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.952258 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:07 crc kubenswrapper[4780]: E0219 08:22:07.952488 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.966658 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls
\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.969544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:07 crc 
kubenswrapper[4780]: I0219 08:22:07.969595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.969616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.969645 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.969666 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:07Z","lastTransitionTime":"2026-02-19T08:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.984313 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:07 crc kubenswrapper[4780]: I0219 08:22:07.999808 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.020943 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.039786 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.065810 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.076233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.076439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.076557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.076721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.076876 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.090526 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.105346 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.125171 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.142601 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.164164 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.179713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.179783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.179795 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.179813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.179823 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.190982 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.205154 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.223188 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.237211 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc 
kubenswrapper[4780]: I0219 08:22:08.256730 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.275107 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.281932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.281999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.282035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.282055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.282069 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.383878 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.383958 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.383992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.384028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.384049 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.486951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.486992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.487004 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.487020 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.487033 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.589876 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.589914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.589924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.589958 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.589970 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.692928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.692996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.693014 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.693045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.693064 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.796579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.796672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.796697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.796737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.796765 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.899787 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.899858 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.899870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.899892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.899907 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:08Z","lastTransitionTime":"2026-02-19T08:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.933170 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:55:11.110144584 +0000 UTC Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.937753 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.937788 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:08 crc kubenswrapper[4780]: E0219 08:22:08.937922 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:08 crc kubenswrapper[4780]: I0219 08:22:08.937752 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:08 crc kubenswrapper[4780]: E0219 08:22:08.938266 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:08 crc kubenswrapper[4780]: E0219 08:22:08.938368 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.003307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.003520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.003643 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.003776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.003965 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.107600 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.107769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.107864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.107957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.108035 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.211919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.212020 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.212035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.212054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.212064 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.314469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.314513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.314533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.314552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.314564 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.417090 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.417185 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.417201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.417225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.417238 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.519398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.519470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.519482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.519505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.519524 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.622550 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.622599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.622612 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.622630 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.622643 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.726808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.726895 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.726919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.726952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.726977 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.831059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.831149 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.831169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.831201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.831218 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.934172 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:13:42.173763147 +0000 UTC Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.935095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.935156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.935171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.935191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.935204 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:09Z","lastTransitionTime":"2026-02-19T08:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:09 crc kubenswrapper[4780]: I0219 08:22:09.937508 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:09 crc kubenswrapper[4780]: E0219 08:22:09.937663 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.037767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.037854 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.037872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.037901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.037918 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.141570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.141630 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.141651 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.141684 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.141711 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.244549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.244615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.244633 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.244657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.244676 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.348196 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.348254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.348276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.348301 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.348320 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.451305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.451352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.451364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.451383 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.451396 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.554005 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.554070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.554087 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.554110 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.554145 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.656799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.656867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.656887 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.656914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.656935 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.759826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.759880 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.759892 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.759912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.759927 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.863976 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.864031 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.864043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.864063 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.864079 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.934716 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:34:00.644929645 +0000 UTC Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.938153 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.938728 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.938763 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:10 crc kubenswrapper[4780]: E0219 08:22:10.938907 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:10 crc kubenswrapper[4780]: E0219 08:22:10.938802 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:10 crc kubenswrapper[4780]: E0219 08:22:10.939089 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.939306 4780 scope.go:117] "RemoveContainer" containerID="340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453" Feb 19 08:22:10 crc kubenswrapper[4780]: E0219 08:22:10.939902 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.967654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.967698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.967712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.967731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:10 crc kubenswrapper[4780]: I0219 08:22:10.967744 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:10Z","lastTransitionTime":"2026-02-19T08:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.071058 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.071103 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.071112 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.071147 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.071159 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.174690 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.174754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.174772 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.174800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.174819 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.279908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.279964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.279980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.280005 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.280023 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.383626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.383695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.383712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.383739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.383758 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.487084 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.487221 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.487243 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.487270 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.487291 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.590422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.590487 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.590513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.590549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.590597 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.693261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.693305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.693319 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.693337 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.693351 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.777776 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:11 crc kubenswrapper[4780]: E0219 08:22:11.777940 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:22:11 crc kubenswrapper[4780]: E0219 08:22:11.778007 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:22:43.77798697 +0000 UTC m=+106.521644419 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.795770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.795815 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.795828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.795847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.795863 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.898619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.898693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.898711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.898741 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.898762 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:11Z","lastTransitionTime":"2026-02-19T08:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.935567 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:43:18.623250644 +0000 UTC Feb 19 08:22:11 crc kubenswrapper[4780]: I0219 08:22:11.938090 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:11 crc kubenswrapper[4780]: E0219 08:22:11.938395 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.001768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.001859 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.001879 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.001908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.001928 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.105577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.105656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.105681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.105713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.105735 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.209693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.209759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.209773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.209793 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.209808 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.313546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.313621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.313638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.313675 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.313695 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.417524 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.417605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.417627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.417719 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.417797 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.521921 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.521988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.522007 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.522041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.522060 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.625688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.625787 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.625800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.625818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.625830 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.729224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.729268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.729281 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.729299 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.729312 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.831760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.831807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.831819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.831837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.831851 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.934460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.934545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.934567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.934599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.934622 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:12Z","lastTransitionTime":"2026-02-19T08:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.936698 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:33:20.058018173 +0000 UTC Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.938056 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:12 crc kubenswrapper[4780]: E0219 08:22:12.938285 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.938550 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:12 crc kubenswrapper[4780]: E0219 08:22:12.938660 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:12 crc kubenswrapper[4780]: I0219 08:22:12.939071 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:12 crc kubenswrapper[4780]: E0219 08:22:12.939237 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.037421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.037477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.037495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.037518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.037535 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.140831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.140919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.140944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.140974 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.140992 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.244392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.244503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.244528 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.244565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.244594 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.348073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.348121 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.348203 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.348240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.348267 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.452104 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.452183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.452201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.452225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.452243 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.556493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.556592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.556621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.556698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.556725 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.660993 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.661066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.661086 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.661119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.661647 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.766288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.766399 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.766420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.766483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.766506 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.870691 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.870772 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.870798 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.870834 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.870861 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.937511 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:19:23.762826477 +0000 UTC Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.937821 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:13 crc kubenswrapper[4780]: E0219 08:22:13.938006 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.974218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.974298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.974320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.974348 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:13 crc kubenswrapper[4780]: I0219 08:22:13.974366 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:13Z","lastTransitionTime":"2026-02-19T08:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.077671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.077751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.077770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.077798 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.077818 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.181595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.181771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.181801 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.181834 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.181864 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.285788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.285860 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.285889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.285921 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.285945 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.389999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.390077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.390095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.390157 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.390178 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.493695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.493763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.493784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.493812 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.493832 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.511193 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/0.log" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.511363 4780 generic.go:334] "Generic (PLEG): container finished" podID="c3eeec30-c76f-4ae2-9384-ebd13ac5eed5" containerID="f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3" exitCode=1 Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.511452 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerDied","Data":"f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.512391 4780 scope.go:117] "RemoveContainer" containerID="f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.538473 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.564793 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:13Z\\\",\\\"message\\\":\\\"2026-02-19T08:21:28+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348\\\\n2026-02-19T08:21:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348 to /host/opt/cni/bin/\\\\n2026-02-19T08:21:28Z [verbose] multus-daemon started\\\\n2026-02-19T08:21:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T08:22:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.587439 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.598486 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.598546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.598567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.598594 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.598612 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.608738 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.630277 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.654010 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.704206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.704648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.704668 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.704697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.704714 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.726960 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.747354 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.769890 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.784060 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc 
kubenswrapper[4780]: I0219 08:22:14.800263 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.807570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.807610 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.807623 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.807642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.807655 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.815873 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.830956 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.848736 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.865516 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.881259 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.899445 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.910862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.910915 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.910927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.910946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.910958 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:14Z","lastTransitionTime":"2026-02-19T08:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.937484 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.937488 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.937635 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:14 crc kubenswrapper[4780]: E0219 08:22:14.937671 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:14 crc kubenswrapper[4780]: I0219 08:22:14.937745 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:40:31.493263987 +0000 UTC Feb 19 08:22:14 crc kubenswrapper[4780]: E0219 08:22:14.937827 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:14 crc kubenswrapper[4780]: E0219 08:22:14.937920 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.013577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.013617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.013655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.013678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.013690 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.117631 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.117722 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.117740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.117768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.117787 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.221228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.221277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.221298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.221323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.221342 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.324872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.324942 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.324960 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.324981 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.324995 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.428649 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.428714 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.428737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.428765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.428786 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.518281 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/0.log" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.518363 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerStarted","Data":"f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.531804 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.531872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.531893 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.531927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.531951 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.544326 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.565285 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.585929 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.606177 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.630327 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.634672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.634732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.634751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.634786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.634809 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.651364 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.672712 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:13Z\\\",\\\"message\\\":\\\"2026-02-19T08:21:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348\\\\n2026-02-19T08:21:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348 to /host/opt/cni/bin/\\\\n2026-02-19T08:21:28Z [verbose] multus-daemon started\\\\n2026-02-19T08:21:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:22:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.688179 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.703022 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.728389 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.738830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.738927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.738954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.739177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.739422 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.759021 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.771590 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.784243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.796328 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc 
kubenswrapper[4780]: I0219 08:22:15.821594 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.843716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.843808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.843837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.843870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.843896 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.845865 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.863970 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.937914 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:23:57.523070292 +0000 UTC Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.938378 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:15 crc kubenswrapper[4780]: E0219 08:22:15.938626 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.946574 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.946649 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.946673 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.946704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:15 crc kubenswrapper[4780]: I0219 08:22:15.946727 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:15Z","lastTransitionTime":"2026-02-19T08:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.049782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.049844 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.049862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.049888 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.049909 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.153091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.153194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.153225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.153264 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.153292 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.256762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.256813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.256831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.256858 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.256877 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.360057 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.360156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.360177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.360205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.360227 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.463946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.464008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.464026 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.464053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.464073 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.546299 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.546360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.546379 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.546407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.546430 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.568618 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.575287 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.575367 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.575393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.575430 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.575455 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.596450 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.602788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.602848 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.602868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.602902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.602926 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.623369 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.629249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.629303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.629322 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.629347 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.629365 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.651327 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.657312 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.657365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.657387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.657414 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.657431 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.679245 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.679574 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.681646 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.681695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.681711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.681733 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.681749 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.785266 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.785305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.785314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.785333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.785345 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.888363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.888411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.888420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.888436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.888447 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.937821 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.937910 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.937952 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.938008 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.938080 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:13:39.893385893 +0000 UTC Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.938195 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:16 crc kubenswrapper[4780]: E0219 08:22:16.938034 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.990774 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.990814 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.990826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.990843 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:16 crc kubenswrapper[4780]: I0219 08:22:16.990857 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:16Z","lastTransitionTime":"2026-02-19T08:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.093894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.093933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.093954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.093969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.093978 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.199683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.199763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.199788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.199819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.199842 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.304060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.304197 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.304224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.304249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.304266 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.408028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.408163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.408188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.408216 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.408237 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.512287 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.512364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.512382 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.512410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.512429 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.615719 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.615805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.615824 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.615847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.615865 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.720054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.720158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.720182 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.720213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.720233 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.823206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.823277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.823295 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.823323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.823346 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.927363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.927435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.927460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.927494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.927518 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:17Z","lastTransitionTime":"2026-02-19T08:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.937208 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:17 crc kubenswrapper[4780]: E0219 08:22:17.937424 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.938254 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:43:22.950309532 +0000 UTC Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.958917 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:17 crc kubenswrapper[4780]: I0219 08:22:17.980949 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.006068 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:13Z\\\",\\\"message\\\":\\\"2026-02-19T08:21:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348\\\\n2026-02-19T08:21:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348 to /host/opt/cni/bin/\\\\n2026-02-19T08:21:28Z [verbose] multus-daemon started\\\\n2026-02-19T08:21:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:22:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.031764 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.031847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.031866 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.031897 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.031919 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.041352 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.058401 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.075947 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.093949 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc 
kubenswrapper[4780]: I0219 08:22:18.113508 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.133165 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.135504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.135574 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.135593 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.135622 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.135640 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.149591 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.169768 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.190859 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.213728 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.236568 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.239966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.240008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.240026 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.240048 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.240064 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.261973 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.286608 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.307301 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.343296 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.343350 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.343368 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.343393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.343409 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.447033 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.447173 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.447205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.447246 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.447272 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.550709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.550763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.550796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.550818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.550834 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.655643 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.655703 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.655717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.655740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.655756 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.759119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.759225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.759243 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.759270 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.759288 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.862667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.862709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.862722 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.862740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.862753 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.937216 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.937332 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.937246 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:18 crc kubenswrapper[4780]: E0219 08:22:18.937443 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:18 crc kubenswrapper[4780]: E0219 08:22:18.937648 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:18 crc kubenswrapper[4780]: E0219 08:22:18.937792 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.938396 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:47:50.597798069 +0000 UTC
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.965696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.965732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.965740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.965758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:18 crc kubenswrapper[4780]: I0219 08:22:18.965770 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:18Z","lastTransitionTime":"2026-02-19T08:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.068357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.068443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.068464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.068492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.068514 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.172017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.172073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.172091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.172117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.172158 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.275860 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.275989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.276050 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.276089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.276165 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.380065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.380168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.380187 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.380219 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.380245 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.484311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.484391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.484410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.484439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.484459 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.587359 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.587439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.587456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.587482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.587498 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.690983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.691052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.691070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.691096 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.691116 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.795042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.795118 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.795161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.795194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.795229 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.898265 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.898341 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.898360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.898391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.898411 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:19Z","lastTransitionTime":"2026-02-19T08:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.938023 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:19 crc kubenswrapper[4780]: E0219 08:22:19.938283 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:19 crc kubenswrapper[4780]: I0219 08:22:19.938618 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:54:45.104157938 +0000 UTC
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.002271 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.002396 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.002423 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.002460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.002484 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.106889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.106976 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.106998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.107028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.107048 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.210915 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.211004 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.211027 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.211060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.211082 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.314202 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.314257 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.314272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.314294 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.314307 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.418357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.418436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.418458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.418487 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.418543 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.522695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.522762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.522783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.522812 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.522832 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.626085 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.626387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.626447 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.626481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.626505 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.730718 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.730796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.730815 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.730852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.730873 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.834533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.834586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.834607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.834632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.834659 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937200 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:20 crc kubenswrapper[4780]: E0219 08:22:20.937391 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937423 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937461 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937476 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937526 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937543 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:20Z","lastTransitionTime":"2026-02-19T08:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937210 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:20 crc kubenswrapper[4780]: E0219 08:22:20.937803 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.937865 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:20 crc kubenswrapper[4780]: E0219 08:22:20.937939 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:20 crc kubenswrapper[4780]: I0219 08:22:20.939716 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:16:44.774659629 +0000 UTC
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.040292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.040371 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.040390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.040416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.040441 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.143618 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.143695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.143713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.143744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.143761 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.250754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.250834 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.250852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.250877 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.250902 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.354210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.354316 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.354335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.354364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.354386 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.457622 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.457677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.457695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.457716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.457733 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.560602 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.560680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.560706 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.560738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.560762 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.664412 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.664488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.664539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.664566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.664588 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.768682 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.768761 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.768784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.768812 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.768838 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.872282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.872404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.872425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.872456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.872475 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.938246 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:21 crc kubenswrapper[4780]: E0219 08:22:21.938532 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.940155 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:38:12.496239291 +0000 UTC Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.975920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.975992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.976011 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.976036 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:21 crc kubenswrapper[4780]: I0219 08:22:21.976055 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:21Z","lastTransitionTime":"2026-02-19T08:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.079677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.079765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.079809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.079850 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.079874 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.183065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.183181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.183203 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.183234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.183256 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.286971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.287048 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.287065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.287091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.287194 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.391348 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.391441 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.391462 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.391488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.391506 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.495417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.495498 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.495520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.495546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.495566 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.599453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.599557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.599575 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.599601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.599620 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.703959 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.704054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.704075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.704104 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.704153 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.808042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.808118 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.808181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.808206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.808228 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911063 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911272 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911368 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911430 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:26.911388317 +0000 UTC m=+149.655045806 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911492 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911545 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911569 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911589 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911661 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:22:22 crc 
kubenswrapper[4780]: E0219 08:22:22.911666 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:23:26.911632783 +0000 UTC m=+149.655290262 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911794 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:23:26.911770066 +0000 UTC m=+149.655427545 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911824 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911848 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.911865 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:22Z","lastTransitionTime":"2026-02-19T08:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.911879 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.912218 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:23:26.912099055 +0000 UTC m=+149.655756544 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.937345 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.937422 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.937545 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.937613 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.937751 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:22 crc kubenswrapper[4780]: E0219 08:22:22.937805 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.938808 4780 scope.go:117] "RemoveContainer" containerID="340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453" Feb 19 08:22:22 crc kubenswrapper[4780]: I0219 08:22:22.940380 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:01:02.652318072 +0000 UTC Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.012668 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:23 crc kubenswrapper[4780]: E0219 08:22:23.012909 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:22:23 crc kubenswrapper[4780]: E0219 08:22:23.012951 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:22:23 crc kubenswrapper[4780]: E0219 08:22:23.012976 4780 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:22:23 crc kubenswrapper[4780]: E0219 08:22:23.013054 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:23:27.013028551 +0000 UTC m=+149.756686040 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.014818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.014869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.014888 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.014914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.014934 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.118346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.118406 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.118426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.118451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.118468 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.221965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.222044 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.222057 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.222084 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.222101 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.324728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.324774 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.324788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.324806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.324817 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.429533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.429593 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.429615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.429645 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.429667 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.532623 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.532698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.532723 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.532753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.532776 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.551279 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/2.log" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.555193 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.555854 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.578554 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1a
c0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.602182 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.621177 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.632231 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.635293 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.635360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.635376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.635396 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.635737 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.647590 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.659139 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc 
kubenswrapper[4780]: I0219 08:22:23.679929 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.700243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.716485 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2
685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.738701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.738686 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.738787 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.738971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.739006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.739021 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.756533 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.770344 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.786339 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.804672 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.820698 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.839599 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:13Z\\\",\\\"message\\\":\\\"2026-02-19T08:21:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348\\\\n2026-02-19T08:21:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348 to /host/opt/cni/bin/\\\\n2026-02-19T08:21:28Z [verbose] multus-daemon started\\\\n2026-02-19T08:21:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:22:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.841351 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.841389 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.841400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.841417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.841427 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.853742 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547
416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.937564 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:23 crc kubenswrapper[4780]: E0219 08:22:23.937741 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.940531 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:09:51.734692116 +0000 UTC Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.943332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.943396 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.943405 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.943422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:23 crc kubenswrapper[4780]: I0219 08:22:23.943433 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:23Z","lastTransitionTime":"2026-02-19T08:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.046835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.046897 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.046911 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.046931 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.046945 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.149472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.149518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.149527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.149548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.149561 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.252529 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.252582 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.252594 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.252616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.252631 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.356217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.356290 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.356312 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.356342 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.356362 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.460357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.460431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.460453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.460490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.460510 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.566907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.566982 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.567017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.567049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.567076 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.567639 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/3.log" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.570060 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/2.log" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.574896 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" exitCode=1 Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.574983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.575049 4780 scope.go:117] "RemoveContainer" containerID="340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.577378 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:22:24 crc kubenswrapper[4780]: E0219 08:22:24.577903 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.602724 4780 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.626907 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.649184 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.670790 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.672203 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.672456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.672636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.672891 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.673070 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.695274 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.716044 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.739799 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:13Z\\\",\\\"message\\\":\\\"2026-02-19T08:21:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348\\\\n2026-02-19T08:21:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348 to /host/opt/cni/bin/\\\\n2026-02-19T08:21:28Z [verbose] multus-daemon started\\\\n2026-02-19T08:21:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:22:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.760360 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.777082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc 
kubenswrapper[4780]: I0219 08:22:24.777159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.777175 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.777199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.777212 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.781562 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.802093 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e1
4d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.831642 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e1b9beef660cdf2fa1f4d6d577798d840c6bceddb657e64063afab47c6453\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:21:56Z\\\",\\\"message\\\":\\\"ping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 08:21:56.041517 6442 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041684 6442 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 08:21:56.041712 6442 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 08:21:56.041829 6442 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:21:56.041858 6442 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:21:56.041865 6442 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:21:56.041899 6442 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:21:56.041911 6442 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:21:56.041923 6442 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:21:56.041950 6442 factory.go:656] Stopping watch factory\\\\nI0219 08:21:56.041982 6442 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:24Z\\\",\\\"message\\\":\\\" zone local for Pod openshift-multus/multus-additional-cni-plugins-mlb49 in node crc\\\\nI0219 08:22:24.081692 6860 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0219 08:22:24.081699 6860 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-multus/multus-additional-cni-plugins-mlb49 after 0 failed attempt(s)\\\\nI0219 08:22:24.081704 6860 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-mlb49\\\\nI0219 08:22:24.081716 6860 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:22:24.081573 6860 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0219 08:22:24.081728 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.851041 4780 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.867792 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.881083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.881147 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.881159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.881178 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.881191 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.887662 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc 
kubenswrapper[4780]: I0219 08:22:24.908781 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.929567 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.937372 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:24 crc kubenswrapper[4780]: E0219 08:22:24.937503 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.937593 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:24 crc kubenswrapper[4780]: E0219 08:22:24.938269 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.939027 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:24 crc kubenswrapper[4780]: E0219 08:22:24.939242 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.941308 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:54:08.358112915 +0000 UTC Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.948149 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:24Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.957166 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.983509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.983553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.983568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.983597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:24 crc kubenswrapper[4780]: I0219 08:22:24.983613 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:24Z","lastTransitionTime":"2026-02-19T08:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.086446 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.086492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.086505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.086523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.086536 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.189666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.189719 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.189732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.189759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.189775 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.292206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.292260 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.292270 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.292289 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.292301 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.395354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.395415 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.395427 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.395443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.395453 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.498820 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.498876 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.498889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.498910 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.498923 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.583050 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/3.log" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.588527 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:22:25 crc kubenswrapper[4780]: E0219 08:22:25.588753 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.602365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.602451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.602479 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.602510 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.602535 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.613782 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920aa359-8647-440a-842e-066313c39414\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b0553fa5c0919b2d674c9e01aed403ea5bcb1d8f7d141d21b59f8739c861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-pro
xy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmgzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw5ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.631051 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dc795cfbc5b318a7483f146686967998793f9298d675aa20370ad82d077ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a84dea81d080c84d3070cd6e29bbb3d311dab9cad0036617f1be34214246cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.647548 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e66e3868570190101bf7c90ea0e813f30dadc9c5efdde549f1aa0dad8251d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.666586 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33abc105-8bb0-4564-a24f-210e18813bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:21:12Z\\\",\\\"message\\\":\\\"W0219 08:21:01.437597 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 08:21:01.438295 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771489261 cert, and key in /tmp/serving-cert-923007577/serving-signer.crt, /tmp/serving-cert-923007577/serving-signer.key\\\\nI0219 08:21:02.178416 1 observer_polling.go:159] Starting file observer\\\\nW0219 08:21:02.181648 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 08:21:02.181875 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:21:02.192283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-923007577/tls.crt::/tmp/serving-cert-923007577/tls.key\\\\\\\"\\\\nF0219 08:21:12.486757 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.684778 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e4ebad5ea1f97c31e5994ee0ebdf58cfcedb578b9a07c9cc1216b6a80600b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.701230 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.705601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.705666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.705679 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.705698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.705723 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.715690 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79a5c642-7764-4c09-aae6-3cf33b0534af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b145b4744466089e0b4f31e42364fa3d46ed514232e0828495372788bc51febc\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d23d801753c31807b68c5ce7b547416874b610b2dc9ae91b5140be53f8f215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://980bb6723f2c977426eefb83d215c01a884a318b4325db5a4ccf2c176379e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75252c79fcbb6dc27142480710cbd5edc90dbb07d687094bc93ba52f7fe162dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.735681 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1692952-9542-44f6-8e88-489c2da62b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3961fe167f58c7c6b206abfb35e2657c2e1de558b878c7d632a8fef72436fc8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe25ace38473b2f174716891d7aab95b15f3e824814188da1c16599c5844e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa9d3455b17ea7a063f92a522e13bb1129a90e8679566cdebcadb0de0a93bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7baf032a7666624d21c3854d46b4c4f08e577080290593b8d0e4545e79c133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a4da2cab230efb3c4fd64ba2fded66ea96728b6d66172966998f262bfc9610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f9475292313709ed188aad4cb0ee950e08bfe152fadf5f9d30955a397bb142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f9475292313709ed188aad4cb0ee950e08bfe152fadf5f9d30955a397bb142\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T08:20:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade8b416067c47a7848c5358e6a820f479613db1ce690f9e4c9d8ad5b08f947b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade8b416067c47a7848c5358e6a820f479613db1ce690f9e4c9d8ad5b08f947b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4e458bd0f3903df6d8c927865d49320c4d3d7b917d69fc534b0a6a0088c74323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e458bd0f3903df6d8c927865d49320c4d3d7b917d69fc534b0a6a0088c74323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.753269 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.772318 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgjfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:13Z\\\",\\\"message\\\":\\\"2026-02-19T08:21:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348\\\\n2026-02-19T08:21:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e0a3ae5f-bb19-4edb-a5c4-b6c59711e348 to /host/opt/cni/bin/\\\\n2026-02-19T08:21:28Z [verbose] multus-daemon started\\\\n2026-02-19T08:21:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:22:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbdfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgjfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.798415 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e649075-d5ae-4d3a-b0af-b8f7f7784035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:22:24Z\\\",\\\"message\\\":\\\" zone local for Pod openshift-multus/multus-additional-cni-plugins-mlb49 in node 
crc\\\\nI0219 08:22:24.081692 6860 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0219 08:22:24.081699 6860 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-mlb49 after 0 failed attempt(s)\\\\nI0219 08:22:24.081704 6860 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-mlb49\\\\nI0219 08:22:24.081716 6860 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:22:24.081573 6860 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0219 08:22:24.081728 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:22:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6c089c8f08ec68f3
a56409f98751cf549763c38d369f503e90819831fb79c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96p2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-skpt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.809626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.809708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.809737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.809759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.809773 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.813001 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-59w6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea29fbcd-2cce-4482-87e2-2af59c52beed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d305a67942416487606c8c68671c1b474842bcdd78a11f71858682900fe1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5ddl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-59w6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.829985 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec27bcf-beb5-4439-8572-997ef30fc0ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91d0c05a963c18598096e390ac25c2a5e250495f6c1f9aa404c955eb055cf5c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f03f06bc53465b775c6eff804a05e52851d
7b150ff4268f6527ac386721d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpdzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kljjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.843560 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jg765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvhsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jg765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc 
kubenswrapper[4780]: I0219 08:22:25.859824 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8d5e9f2-1bfd-4b46-b3e7-340bb9027cde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:20:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6469ccc6a554070c179dc60c9917ecb4cdb8820b9871fcccdd0cef7827611a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14d2a9d77a7bb5586a98cbbf1ed45c1bfe078653ae3eed54641cd9ea70a023f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:20:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa09b92414c8e2eeeecd42862caab1dcd7d6af79f663fae6233f386ace1d0f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:20:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.874702 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.889090 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cs47t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61eb1a9-489e-42f7-811c-36eb08e442d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa33835a37957070a12a775382d95e2fdf1ac0860b787f6f74e3be576f9ef437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzfms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cs47t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.908100 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mlb49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a293d184-7162-4977-8158-1b459d68981b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb47b9d32022ae003cee6a091eb1ee9f273c10fde7150054b4f06ee8bc55a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee5cb81aec431b2d364a4fb905c842a0cb8f58915fea9c32d62e2bac6132e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e78a6136115142df61535718b617c7da6ef7d73e423908fd4f039dffe8f745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4990b21ae1c0ce2003c5cc88bccd7f659b9e2ec01a43ba8f5f48117e0122f3ad\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452e14d8f2fa35ac4693c298626c994d3a91f1e96ebe3b3be20e91681ef6125e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19
cf794816dbbd1b8c3394409feec9d183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd1065e2b8ab94349cb5c26135da19cf794816dbbd1b8c3394409feec9d183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:21:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6be98fc8ba103e798192c69a05cce9fc7b7e2b14af9cc8934678f1f2d091c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:21:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-19T08:21:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrrcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:21:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mlb49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:25Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.913306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.913359 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.913370 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.913404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.913418 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:25Z","lastTransitionTime":"2026-02-19T08:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.938193 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:25 crc kubenswrapper[4780]: E0219 08:22:25.938375 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:25 crc kubenswrapper[4780]: I0219 08:22:25.941549 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:27:03.898637435 +0000 UTC Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.016392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.016463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.016476 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.016519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.016532 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.122224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.122276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.122288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.122306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.122320 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.226410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.226450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.226463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.226485 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.226497 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.330566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.330659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.330674 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.330694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.330764 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.434431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.434516 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.434536 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.434562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.434585 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.538845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.538937 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.538964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.539013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.539042 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.643616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.643683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.643703 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.643729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.643748 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.740680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.740759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.740781 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.740808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.740824 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.763999 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:26Z is after 2025-08-24T17:21:41Z"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.771187 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.771256 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.771276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.771303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.771322 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.803372 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:26Z is after 2025-08-24T17:21:41Z"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.810307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.810390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.810411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.810444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.810463 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.832740 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.837327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.837391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.837410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.837436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.837456 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.855921 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.860909 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.860989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.861008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.861040 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.861063 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.878580 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b0e95c9e-0cc6-4df2-aa05-74b171e9d33d\\\",\\\"systemUUID\\\":\\\"acb2587c-96a2-4752-8cc0-31f3ec66dc5a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:22:26Z is after 2025-08-24T17:21:41Z" Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.878745 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.881103 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.881268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.881292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.881327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.881353 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.938319 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.938345 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.938461 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.938640 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.938730 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:26 crc kubenswrapper[4780]: E0219 08:22:26.938867 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.942078 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:01:26.603644329 +0000 UTC Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.984084 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.984232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.984261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.984293 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:26 crc kubenswrapper[4780]: I0219 08:22:26.984315 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:26Z","lastTransitionTime":"2026-02-19T08:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.087300 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.087373 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.087394 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.087422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.087443 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.190991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.191072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.191090 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.191119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.191197 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.293624 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.294204 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.294219 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.294247 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.294268 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.398008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.398077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.398091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.398115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.398156 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.500999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.501049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.501060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.501080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.501091 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.604708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.604763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.604779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.604799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.604817 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.708445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.708499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.708513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.708533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.708549 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.812205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.812282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.812307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.812339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.812367 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.916026 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.916103 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.916174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.916216 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.916241 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:27Z","lastTransitionTime":"2026-02-19T08:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.937601 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:27 crc kubenswrapper[4780]: E0219 08:22:27.937952 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.942508 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:11:04.250446568 +0000 UTC Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.955186 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 08:22:27 crc kubenswrapper[4780]: I0219 08:22:27.978114 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mlb49" podStartSLOduration=62.97808284 podStartE2EDuration="1m2.97808284s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:27.977887465 +0000 UTC m=+90.721544974" watchObservedRunningTime="2026-02-19 08:22:27.97808284 +0000 UTC m=+90.721740319" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.027460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.027523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.027536 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.027560 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.027580 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.040428 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-59w6b" podStartSLOduration=64.040389307 podStartE2EDuration="1m4.040389307s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.038879689 +0000 UTC m=+90.782537178" watchObservedRunningTime="2026-02-19 08:22:28.040389307 +0000 UTC m=+90.784046796" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.055568 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kljjx" podStartSLOduration=63.055533653 podStartE2EDuration="1m3.055533653s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.05541722 +0000 UTC m=+90.799074719" watchObservedRunningTime="2026-02-19 08:22:28.055533653 +0000 UTC m=+90.799191132" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.096147 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.09610474 podStartE2EDuration="1m9.09610474s" podCreationTimestamp="2026-02-19 08:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 08:22:28.095114036 +0000 UTC m=+90.838771515" watchObservedRunningTime="2026-02-19 08:22:28.09610474 +0000 UTC m=+90.839762199" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.130457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.130494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.130506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.130527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.130541 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.145547 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cs47t" podStartSLOduration=64.145523977 podStartE2EDuration="1m4.145523977s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.128370151 +0000 UTC m=+90.872027620" watchObservedRunningTime="2026-02-19 08:22:28.145523977 +0000 UTC m=+90.889181436" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.164916 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podStartSLOduration=63.164893308 podStartE2EDuration="1m3.164893308s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.146365408 +0000 UTC m=+90.890022877" watchObservedRunningTime="2026-02-19 08:22:28.164893308 +0000 UTC m=+90.908550767" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.233200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.233230 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.233238 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.233252 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.233261 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.236297 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.236284461 podStartE2EDuration="1m9.236284461s" podCreationTimestamp="2026-02-19 08:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.217832353 +0000 UTC m=+90.961489802" watchObservedRunningTime="2026-02-19 08:22:28.236284461 +0000 UTC m=+90.979941910" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.253771 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jgjfm" podStartSLOduration=63.253753205 podStartE2EDuration="1m3.253753205s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.253339475 +0000 UTC m=+90.996996934" watchObservedRunningTime="2026-02-19 08:22:28.253753205 +0000 UTC m=+90.997410644" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.271695 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.27166761 podStartE2EDuration="35.27166761s" podCreationTimestamp="2026-02-19 08:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.270713546 +0000 UTC 
m=+91.014371025" watchObservedRunningTime="2026-02-19 08:22:28.27166761 +0000 UTC m=+91.015325059" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.317156 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.3171106980000005 podStartE2EDuration="4.317110698s" podCreationTimestamp="2026-02-19 08:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:28.30469487 +0000 UTC m=+91.048352319" watchObservedRunningTime="2026-02-19 08:22:28.317110698 +0000 UTC m=+91.060768147" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.335531 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.335606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.335632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.335670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.335695 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.439589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.439670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.439690 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.439714 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.439732 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.541835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.542282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.542392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.542523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.542629 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.645298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.645331 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.645340 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.645358 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.645368 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.748499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.748593 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.748620 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.748655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.748681 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.852246 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.852317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.852338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.852386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.852406 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.937659 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:28 crc kubenswrapper[4780]: E0219 08:22:28.937894 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.938050 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.938531 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:28 crc kubenswrapper[4780]: E0219 08:22:28.938884 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:28 crc kubenswrapper[4780]: E0219 08:22:28.938990 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.942791 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:53:30.840977056 +0000 UTC Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.955667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.955751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.955772 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.955799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:28 crc kubenswrapper[4780]: I0219 08:22:28.955818 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:28Z","lastTransitionTime":"2026-02-19T08:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.060173 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.060258 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.060286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.060319 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.060340 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.162966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.163042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.163076 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.163118 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.163179 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.267451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.267517 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.267536 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.267571 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.267594 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.371878 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.371956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.371979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.372009 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.372034 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.476117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.476240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.476258 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.476285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.476308 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.580321 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.580365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.580378 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.580397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.580410 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.683650 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.683701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.683715 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.683735 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.683747 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.787163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.787248 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.787262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.787280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.787297 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.890060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.890107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.890118 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.890147 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.890157 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.938284 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:29 crc kubenswrapper[4780]: E0219 08:22:29.938490 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.943010 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:43:17.638192475 +0000 UTC Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.994444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.994551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.994617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.994645 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:29 crc kubenswrapper[4780]: I0219 08:22:29.994663 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:29Z","lastTransitionTime":"2026-02-19T08:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.099055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.099235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.099274 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.099332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.099367 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.204477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.204549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.204575 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.204605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.204627 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.307410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.307463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.307475 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.307495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.307508 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.411064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.411115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.411140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.411169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.411180 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.514849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.514896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.514912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.514934 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.514948 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.618171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.618236 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.618258 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.618280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.618296 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.721545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.721621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.721642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.721668 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.721693 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.825565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.825639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.825659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.825694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.825715 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.935017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.935088 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.935108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.935169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.935189 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:30Z","lastTransitionTime":"2026-02-19T08:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.937291 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.937388 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:30 crc kubenswrapper[4780]: E0219 08:22:30.937476 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:30 crc kubenswrapper[4780]: E0219 08:22:30.937582 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.937623 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:30 crc kubenswrapper[4780]: E0219 08:22:30.937781 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:30 crc kubenswrapper[4780]: I0219 08:22:30.943385 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:17:55.673546055 +0000 UTC Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.038580 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.038697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.038723 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.038756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.038776 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.142049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.142174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.142213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.142247 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.142268 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.245673 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.245748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.245765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.245792 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.245811 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.349646 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.349749 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.349767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.349799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.349816 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.453424 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.453506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.453544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.453576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.453599 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.557657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.557751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.557773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.557806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.557828 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.661869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.661948 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.661967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.661998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.662025 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.765418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.765488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.765504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.765525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.765540 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.869215 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.869271 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.869283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.869303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.869320 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.938175 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:31 crc kubenswrapper[4780]: E0219 08:22:31.938443 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.943542 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:46:14.299264097 +0000 UTC Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.971727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.971803 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.971825 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.971856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:31 crc kubenswrapper[4780]: I0219 08:22:31.971875 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:31Z","lastTransitionTime":"2026-02-19T08:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.075875 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.075945 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.075963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.075993 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.076012 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.179479 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.179560 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.179583 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.179611 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.179634 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.283363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.283453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.283472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.283500 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.283519 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.387493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.387570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.387588 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.387622 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.387641 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.491424 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.491489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.491555 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.491601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.491628 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.594725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.594793 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.594818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.594856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.594880 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.697956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.698034 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.698055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.698085 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.698108 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.801528 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.801618 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.801638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.801666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.801688 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.905899 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.905977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.905995 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.906031 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.906054 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:32Z","lastTransitionTime":"2026-02-19T08:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.937283 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.937334 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.937388 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:32 crc kubenswrapper[4780]: E0219 08:22:32.937564 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:32 crc kubenswrapper[4780]: E0219 08:22:32.937708 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:32 crc kubenswrapper[4780]: E0219 08:22:32.937815 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:32 crc kubenswrapper[4780]: I0219 08:22:32.944497 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:24:03.677671311 +0000 UTC Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.009450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.009515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.009533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.009559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.009576 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.112930 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.112990 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.113010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.113037 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.113057 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.216190 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.216265 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.216288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.216320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.216347 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.319339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.319410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.319437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.319480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.319506 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.423839 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.423907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.423927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.423953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.423973 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.528527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.528597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.528615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.528641 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.528661 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.632171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.632248 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.632273 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.632307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.632335 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.736392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.736480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.736507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.736544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.736573 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.839664 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.839754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.839780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.839812 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.839836 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.937815 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:33 crc kubenswrapper[4780]: E0219 08:22:33.938104 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.942389 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.942437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.942448 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.942461 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.942471 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:33Z","lastTransitionTime":"2026-02-19T08:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:33 crc kubenswrapper[4780]: I0219 08:22:33.945552 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:45:37.708192606 +0000 UTC
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.046247 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.046418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.046443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.046470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.046490 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.149229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.149300 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.149324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.149388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.149413 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.252809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.252876 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.252900 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.252932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.252952 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.356161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.356549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.356568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.356598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.356619 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.462758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.462832 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.462852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.462880 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.462900 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.566327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.566404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.566427 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.566457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.566484 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.669657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.669721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.669731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.669748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.669760 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.772873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.772954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.772978 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.773012 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.773037 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.876602 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.876654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.876672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.876694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.876711 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.937341 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.937390 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.937412 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:34 crc kubenswrapper[4780]: E0219 08:22:34.937600 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:34 crc kubenswrapper[4780]: E0219 08:22:34.937745 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:34 crc kubenswrapper[4780]: E0219 08:22:34.938504 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.945732 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 04:27:53.460888209 +0000 UTC
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.979933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.979994 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.980030 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.980064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:34 crc kubenswrapper[4780]: I0219 08:22:34.980086 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:34Z","lastTransitionTime":"2026-02-19T08:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.084172 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.084255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.084283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.084316 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.084340 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.188527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.188617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.188651 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.188683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.188704 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.291901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.291974 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.291992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.292021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.292040 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.395966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.396043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.396072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.396105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.396177 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.499353 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.499416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.499434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.499459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.499479 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.603165 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.603232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.603254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.603280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.603298 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.706832 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.706901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.706919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.706946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.706967 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.810332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.810414 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.810437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.810468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.810488 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.914387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.914476 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.914503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.914539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.914563 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:35Z","lastTransitionTime":"2026-02-19T08:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.937983 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:35 crc kubenswrapper[4780]: E0219 08:22:35.938280 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:35 crc kubenswrapper[4780]: I0219 08:22:35.946106 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:55:45.457736945 +0000 UTC
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.017476 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.017548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.017567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.017594 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.017613 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.121102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.121196 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.121215 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.121242 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.121260 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.224186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.224244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.224262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.224288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.224305 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.328067 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.328179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.328199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.328229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.328248 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.432042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.432166 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.432193 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.432226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.432248 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.536386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.536468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.536487 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.536518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.536574 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.640599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.640695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.640713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.640740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.640761 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.743737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.743799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.743820 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.743849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.743882 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.847847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.847939 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.847959 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.847993 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.848014 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.937835 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.937920 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.937957 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:36 crc kubenswrapper[4780]: E0219 08:22:36.938055 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:36 crc kubenswrapper[4780]: E0219 08:22:36.938239 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:36 crc kubenswrapper[4780]: E0219 08:22:36.938559 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.946299 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:18:46.029061323 +0000 UTC Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.951115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.951209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.951232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.951271 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:36 crc kubenswrapper[4780]: I0219 08:22:36.951292 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:36Z","lastTransitionTime":"2026-02-19T08:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.054269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.054344 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.054368 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.054402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.054423 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:37Z","lastTransitionTime":"2026-02-19T08:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.127195 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.127275 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.127297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.127331 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.127350 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:37Z","lastTransitionTime":"2026-02-19T08:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.158404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.158483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.158505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.158535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.158560 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:22:37Z","lastTransitionTime":"2026-02-19T08:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.213000 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq"] Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.213656 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.218068 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.218071 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.218395 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.218955 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.255191 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=10.255164909 podStartE2EDuration="10.255164909s" podCreationTimestamp="2026-02-19 08:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:37.253617981 +0000 UTC m=+99.997275480" watchObservedRunningTime="2026-02-19 08:22:37.255164909 +0000 UTC m=+99.998822368" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.290747 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.290841 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.290882 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.290921 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.291010 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392112 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392238 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392261 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392283 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392343 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.392386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.393097 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.401382 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.422896 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pj4xq\" (UID: \"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.536495 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.639196 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" event={"ID":"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6","Type":"ContainerStarted","Data":"b6f66d6736a495d84a6542bb48ca39f9f7c9354d7c2e738a0437217c677a1481"} Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.937654 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:37 crc kubenswrapper[4780]: E0219 08:22:37.939575 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.947560 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:28:30.234872715 +0000 UTC Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.947642 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 08:22:37 crc kubenswrapper[4780]: I0219 08:22:37.960734 4780 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 08:22:38 crc kubenswrapper[4780]: I0219 08:22:38.645201 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" event={"ID":"b7b3dbe4-edb4-4dac-95e8-e7cb2d4e30a6","Type":"ContainerStarted","Data":"91a6079f90174707f58d05279ad8c85db81ecb6868995f77b00bac23cb05d26c"} Feb 19 08:22:38 crc kubenswrapper[4780]: I0219 08:22:38.667411 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pj4xq" podStartSLOduration=73.667392007 podStartE2EDuration="1m13.667392007s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:38.666262739 +0000 UTC m=+101.409920198" watchObservedRunningTime="2026-02-19 08:22:38.667392007 +0000 UTC m=+101.411049466" Feb 19 08:22:38 crc kubenswrapper[4780]: I0219 08:22:38.937622 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:22:38 crc kubenswrapper[4780]: I0219 08:22:38.937692 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:38 crc kubenswrapper[4780]: I0219 08:22:38.937644 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:38 crc kubenswrapper[4780]: E0219 08:22:38.937820 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:22:38 crc kubenswrapper[4780]: E0219 08:22:38.937995 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:22:38 crc kubenswrapper[4780]: E0219 08:22:38.938613 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:22:38 crc kubenswrapper[4780]: I0219 08:22:38.939269 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:22:38 crc kubenswrapper[4780]: E0219 08:22:38.939615 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" Feb 19 08:22:39 crc kubenswrapper[4780]: I0219 08:22:39.937842 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:22:39 crc kubenswrapper[4780]: E0219 08:22:39.938091 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:22:40 crc kubenswrapper[4780]: I0219 08:22:40.937433 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:22:40 crc kubenswrapper[4780]: I0219 08:22:40.937519 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:22:40 crc kubenswrapper[4780]: I0219 08:22:40.937588 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:40 crc kubenswrapper[4780]: E0219 08:22:40.937716 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:40 crc kubenswrapper[4780]: E0219 08:22:40.938029 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:40 crc kubenswrapper[4780]: E0219 08:22:40.938534 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:41 crc kubenswrapper[4780]: I0219 08:22:41.937911 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:41 crc kubenswrapper[4780]: E0219 08:22:41.938096 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:42 crc kubenswrapper[4780]: I0219 08:22:42.937827 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:42 crc kubenswrapper[4780]: I0219 08:22:42.937978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:42 crc kubenswrapper[4780]: I0219 08:22:42.938069 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:42 crc kubenswrapper[4780]: E0219 08:22:42.938060 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:42 crc kubenswrapper[4780]: E0219 08:22:42.938202 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:42 crc kubenswrapper[4780]: E0219 08:22:42.938400 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:43 crc kubenswrapper[4780]: I0219 08:22:43.876878 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:43 crc kubenswrapper[4780]: E0219 08:22:43.877255 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 08:22:43 crc kubenswrapper[4780]: E0219 08:22:43.877419 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs podName:d1002d5b-b8b1-4175-9e36-9fbea7a1c060 nodeName:}" failed. No retries permitted until 2026-02-19 08:23:47.877378437 +0000 UTC m=+170.621035916 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs") pod "network-metrics-daemon-jg765" (UID: "d1002d5b-b8b1-4175-9e36-9fbea7a1c060") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 08:22:43 crc kubenswrapper[4780]: I0219 08:22:43.938112 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:43 crc kubenswrapper[4780]: E0219 08:22:43.938379 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:44 crc kubenswrapper[4780]: I0219 08:22:44.937780 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:44 crc kubenswrapper[4780]: E0219 08:22:44.938568 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:44 crc kubenswrapper[4780]: I0219 08:22:44.939239 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:44 crc kubenswrapper[4780]: E0219 08:22:44.939360 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:44 crc kubenswrapper[4780]: I0219 08:22:44.939535 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:44 crc kubenswrapper[4780]: E0219 08:22:44.939651 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:45 crc kubenswrapper[4780]: I0219 08:22:45.937606 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:45 crc kubenswrapper[4780]: E0219 08:22:45.938302 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:46 crc kubenswrapper[4780]: I0219 08:22:46.937754 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:46 crc kubenswrapper[4780]: I0219 08:22:46.937772 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:46 crc kubenswrapper[4780]: I0219 08:22:46.938102 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:46 crc kubenswrapper[4780]: E0219 08:22:46.939024 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:46 crc kubenswrapper[4780]: E0219 08:22:46.938830 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:46 crc kubenswrapper[4780]: E0219 08:22:46.939291 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:47 crc kubenswrapper[4780]: I0219 08:22:47.938257 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:47 crc kubenswrapper[4780]: E0219 08:22:47.939667 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:48 crc kubenswrapper[4780]: I0219 08:22:48.937412 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:48 crc kubenswrapper[4780]: I0219 08:22:48.937478 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:48 crc kubenswrapper[4780]: I0219 08:22:48.937551 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:48 crc kubenswrapper[4780]: E0219 08:22:48.937666 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:48 crc kubenswrapper[4780]: E0219 08:22:48.937856 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:48 crc kubenswrapper[4780]: E0219 08:22:48.937998 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:49 crc kubenswrapper[4780]: I0219 08:22:49.937484 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:49 crc kubenswrapper[4780]: E0219 08:22:49.937690 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:50 crc kubenswrapper[4780]: I0219 08:22:50.937982 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:50 crc kubenswrapper[4780]: I0219 08:22:50.938054 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:50 crc kubenswrapper[4780]: I0219 08:22:50.938154 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:50 crc kubenswrapper[4780]: E0219 08:22:50.938432 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:50 crc kubenswrapper[4780]: E0219 08:22:50.938553 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:50 crc kubenswrapper[4780]: E0219 08:22:50.938633 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:51 crc kubenswrapper[4780]: I0219 08:22:51.937758 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:51 crc kubenswrapper[4780]: E0219 08:22:51.937956 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:51 crc kubenswrapper[4780]: I0219 08:22:51.939658 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"
Feb 19 08:22:51 crc kubenswrapper[4780]: E0219 08:22:51.940013 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035"
Feb 19 08:22:52 crc kubenswrapper[4780]: I0219 08:22:52.937536 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:52 crc kubenswrapper[4780]: I0219 08:22:52.937593 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:52 crc kubenswrapper[4780]: I0219 08:22:52.937617 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:52 crc kubenswrapper[4780]: E0219 08:22:52.938256 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:52 crc kubenswrapper[4780]: E0219 08:22:52.938371 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:52 crc kubenswrapper[4780]: E0219 08:22:52.938767 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:53 crc kubenswrapper[4780]: I0219 08:22:53.937797 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:53 crc kubenswrapper[4780]: E0219 08:22:53.938620 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:54 crc kubenswrapper[4780]: I0219 08:22:54.937927 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:54 crc kubenswrapper[4780]: I0219 08:22:54.937942 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:54 crc kubenswrapper[4780]: I0219 08:22:54.938023 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:54 crc kubenswrapper[4780]: E0219 08:22:54.939475 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:54 crc kubenswrapper[4780]: E0219 08:22:54.939573 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:54 crc kubenswrapper[4780]: E0219 08:22:54.939785 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:55 crc kubenswrapper[4780]: I0219 08:22:55.937486 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:55 crc kubenswrapper[4780]: E0219 08:22:55.937699 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:56 crc kubenswrapper[4780]: I0219 08:22:56.937868 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:56 crc kubenswrapper[4780]: I0219 08:22:56.937983 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:56 crc kubenswrapper[4780]: I0219 08:22:56.937881 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:56 crc kubenswrapper[4780]: E0219 08:22:56.938075 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:56 crc kubenswrapper[4780]: E0219 08:22:56.938311 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:56 crc kubenswrapper[4780]: E0219 08:22:56.938488 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:57 crc kubenswrapper[4780]: E0219 08:22:57.909916 4780 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 19 08:22:57 crc kubenswrapper[4780]: I0219 08:22:57.937808 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:57 crc kubenswrapper[4780]: E0219 08:22:57.939855 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:22:58 crc kubenswrapper[4780]: E0219 08:22:58.083728 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 19 08:22:58 crc kubenswrapper[4780]: I0219 08:22:58.937896 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:22:58 crc kubenswrapper[4780]: I0219 08:22:58.938008 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:22:58 crc kubenswrapper[4780]: E0219 08:22:58.938064 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:22:58 crc kubenswrapper[4780]: E0219 08:22:58.938273 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:22:58 crc kubenswrapper[4780]: I0219 08:22:58.938742 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:22:58 crc kubenswrapper[4780]: E0219 08:22:58.939007 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:22:59 crc kubenswrapper[4780]: I0219 08:22:59.937342 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:22:59 crc kubenswrapper[4780]: E0219 08:22:59.937618 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.737972 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/1.log"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.738870 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/0.log"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.738961 4780 generic.go:334] "Generic (PLEG): container finished" podID="c3eeec30-c76f-4ae2-9384-ebd13ac5eed5" containerID="f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3" exitCode=1
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.739014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerDied","Data":"f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3"}
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.739073 4780 scope.go:117] "RemoveContainer" containerID="f908947803a153f2c8f679d3111e1c2cba0b883a088fcd7c2eb11201d6f0b2d3"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.739760 4780 scope.go:117] "RemoveContainer" containerID="f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3"
Feb 19 08:23:00 crc kubenswrapper[4780]: E0219 08:23:00.740505 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jgjfm_openshift-multus(c3eeec30-c76f-4ae2-9384-ebd13ac5eed5)\"" pod="openshift-multus/multus-jgjfm" podUID="c3eeec30-c76f-4ae2-9384-ebd13ac5eed5"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.938011 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.938108 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:23:00 crc kubenswrapper[4780]: I0219 08:23:00.938107 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:23:00 crc kubenswrapper[4780]: E0219 08:23:00.938209 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:23:00 crc kubenswrapper[4780]: E0219 08:23:00.938546 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:23:00 crc kubenswrapper[4780]: E0219 08:23:00.938717 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:23:01 crc kubenswrapper[4780]: I0219 08:23:01.744898 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/1.log"
Feb 19 08:23:01 crc kubenswrapper[4780]: I0219 08:23:01.937669 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:23:01 crc kubenswrapper[4780]: E0219 08:23:01.938001 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:23:02 crc kubenswrapper[4780]: I0219 08:23:02.937902 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:23:02 crc kubenswrapper[4780]: I0219 08:23:02.937902 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:23:02 crc kubenswrapper[4780]: E0219 08:23:02.938690 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:23:02 crc kubenswrapper[4780]: I0219 08:23:02.937966 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:23:02 crc kubenswrapper[4780]: E0219 08:23:02.938804 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:23:02 crc kubenswrapper[4780]: E0219 08:23:02.938976 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:23:03 crc kubenswrapper[4780]: E0219 08:23:03.086038 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 19 08:23:03 crc kubenswrapper[4780]: I0219 08:23:03.937782 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765"
Feb 19 08:23:03 crc kubenswrapper[4780]: E0219 08:23:03.938062 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060"
Feb 19 08:23:03 crc kubenswrapper[4780]: I0219 08:23:03.939608 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"
Feb 19 08:23:03 crc kubenswrapper[4780]: E0219 08:23:03.939905 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-skpt9_openshift-ovn-kubernetes(6e649075-d5ae-4d3a-b0af-b8f7f7784035)\"" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035"
Feb 19 08:23:04 crc kubenswrapper[4780]: I0219 08:23:04.937470 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 08:23:04 crc kubenswrapper[4780]: I0219 08:23:04.937557 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 08:23:04 crc kubenswrapper[4780]: I0219 08:23:04.937625 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 08:23:04 crc kubenswrapper[4780]: E0219 08:23:04.937795 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 08:23:04 crc kubenswrapper[4780]: E0219 08:23:04.937966 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 08:23:04 crc kubenswrapper[4780]: E0219 08:23:04.938108 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 08:23:05 crc kubenswrapper[4780]: I0219 08:23:05.937936 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:05 crc kubenswrapper[4780]: E0219 08:23:05.938265 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:06 crc kubenswrapper[4780]: I0219 08:23:06.937091 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:06 crc kubenswrapper[4780]: I0219 08:23:06.937192 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:06 crc kubenswrapper[4780]: I0219 08:23:06.937192 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:06 crc kubenswrapper[4780]: E0219 08:23:06.937325 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:06 crc kubenswrapper[4780]: E0219 08:23:06.937542 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:06 crc kubenswrapper[4780]: E0219 08:23:06.937795 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:07 crc kubenswrapper[4780]: I0219 08:23:07.937953 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:07 crc kubenswrapper[4780]: E0219 08:23:07.940316 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:08 crc kubenswrapper[4780]: E0219 08:23:08.087443 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:23:08 crc kubenswrapper[4780]: I0219 08:23:08.938200 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:08 crc kubenswrapper[4780]: I0219 08:23:08.938455 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:08 crc kubenswrapper[4780]: E0219 08:23:08.938482 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:08 crc kubenswrapper[4780]: E0219 08:23:08.938680 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:08 crc kubenswrapper[4780]: I0219 08:23:08.939026 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:08 crc kubenswrapper[4780]: E0219 08:23:08.939178 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:09 crc kubenswrapper[4780]: I0219 08:23:09.938290 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:09 crc kubenswrapper[4780]: E0219 08:23:09.938537 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:10 crc kubenswrapper[4780]: I0219 08:23:10.937483 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:10 crc kubenswrapper[4780]: I0219 08:23:10.937608 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:10 crc kubenswrapper[4780]: I0219 08:23:10.937499 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:10 crc kubenswrapper[4780]: E0219 08:23:10.937702 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:10 crc kubenswrapper[4780]: E0219 08:23:10.938066 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:10 crc kubenswrapper[4780]: E0219 08:23:10.938302 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:11 crc kubenswrapper[4780]: I0219 08:23:11.938330 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:11 crc kubenswrapper[4780]: E0219 08:23:11.938560 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:12 crc kubenswrapper[4780]: I0219 08:23:12.937447 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:12 crc kubenswrapper[4780]: I0219 08:23:12.937447 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:12 crc kubenswrapper[4780]: I0219 08:23:12.937447 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:12 crc kubenswrapper[4780]: E0219 08:23:12.937739 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:12 crc kubenswrapper[4780]: E0219 08:23:12.937908 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:12 crc kubenswrapper[4780]: E0219 08:23:12.938033 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:13 crc kubenswrapper[4780]: E0219 08:23:13.088844 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:23:13 crc kubenswrapper[4780]: I0219 08:23:13.937716 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:13 crc kubenswrapper[4780]: E0219 08:23:13.938093 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:13 crc kubenswrapper[4780]: I0219 08:23:13.938994 4780 scope.go:117] "RemoveContainer" containerID="f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3" Feb 19 08:23:14 crc kubenswrapper[4780]: I0219 08:23:14.803722 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/1.log" Feb 19 08:23:14 crc kubenswrapper[4780]: I0219 08:23:14.803821 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerStarted","Data":"58daa9b743d50a852f8a8d1ead5c1400cc941d3471ab2603c155aed626ec9aac"} Feb 19 08:23:14 crc kubenswrapper[4780]: I0219 08:23:14.937708 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:14 crc kubenswrapper[4780]: I0219 08:23:14.937754 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:14 crc kubenswrapper[4780]: I0219 08:23:14.937809 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:14 crc kubenswrapper[4780]: E0219 08:23:14.937968 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:14 crc kubenswrapper[4780]: E0219 08:23:14.938220 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:14 crc kubenswrapper[4780]: E0219 08:23:14.938677 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:15 crc kubenswrapper[4780]: I0219 08:23:15.938298 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:15 crc kubenswrapper[4780]: E0219 08:23:15.938717 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:16 crc kubenswrapper[4780]: I0219 08:23:16.937331 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:16 crc kubenswrapper[4780]: I0219 08:23:16.937363 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:16 crc kubenswrapper[4780]: E0219 08:23:16.937618 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:16 crc kubenswrapper[4780]: E0219 08:23:16.937736 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:16 crc kubenswrapper[4780]: I0219 08:23:16.938264 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:16 crc kubenswrapper[4780]: E0219 08:23:16.938670 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:17 crc kubenswrapper[4780]: I0219 08:23:17.937472 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:17 crc kubenswrapper[4780]: E0219 08:23:17.939654 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:17 crc kubenswrapper[4780]: I0219 08:23:17.941212 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:23:18 crc kubenswrapper[4780]: E0219 08:23:18.089547 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.825681 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/3.log" Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.829876 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerStarted","Data":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.830545 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.860959 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jg765"] Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.861207 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:18 crc kubenswrapper[4780]: E0219 08:23:18.861425 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.937858 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.937891 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:18 crc kubenswrapper[4780]: I0219 08:23:18.937916 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:18 crc kubenswrapper[4780]: E0219 08:23:18.938037 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:18 crc kubenswrapper[4780]: E0219 08:23:18.938405 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:18 crc kubenswrapper[4780]: E0219 08:23:18.938606 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:19 crc kubenswrapper[4780]: I0219 08:23:19.937876 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:19 crc kubenswrapper[4780]: E0219 08:23:19.938175 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:20 crc kubenswrapper[4780]: I0219 08:23:20.937778 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:20 crc kubenswrapper[4780]: I0219 08:23:20.937895 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:20 crc kubenswrapper[4780]: I0219 08:23:20.937806 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:20 crc kubenswrapper[4780]: E0219 08:23:20.938108 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:20 crc kubenswrapper[4780]: E0219 08:23:20.938263 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:20 crc kubenswrapper[4780]: E0219 08:23:20.938020 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:21 crc kubenswrapper[4780]: I0219 08:23:21.937809 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:21 crc kubenswrapper[4780]: E0219 08:23:21.938350 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jg765" podUID="d1002d5b-b8b1-4175-9e36-9fbea7a1c060" Feb 19 08:23:22 crc kubenswrapper[4780]: I0219 08:23:22.937283 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:22 crc kubenswrapper[4780]: I0219 08:23:22.937358 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:22 crc kubenswrapper[4780]: I0219 08:23:22.937380 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:22 crc kubenswrapper[4780]: E0219 08:23:22.937473 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:23:22 crc kubenswrapper[4780]: E0219 08:23:22.937752 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:23:22 crc kubenswrapper[4780]: E0219 08:23:22.937671 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:23:23 crc kubenswrapper[4780]: I0219 08:23:23.938195 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:23 crc kubenswrapper[4780]: I0219 08:23:23.941961 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 08:23:23 crc kubenswrapper[4780]: I0219 08:23:23.942089 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.937531 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.937612 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.937666 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.941252 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.941287 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.942279 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 08:23:24 crc kubenswrapper[4780]: I0219 08:23:24.942430 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 08:23:26.934877 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:26 crc kubenswrapper[4780]: E0219 08:23:26.935220 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:25:28.935174889 +0000 UTC m=+271.678832378 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 08:23:26.935306 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 08:23:26.935411 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 
08:23:26.935552 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 08:23:26.937325 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 08:23:26.946455 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:26 crc kubenswrapper[4780]: I0219 08:23:26.949297 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.036594 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.042438 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.059805 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.079615 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.088766 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.364867 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podStartSLOduration=122.364845294 podStartE2EDuration="2m2.364845294s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:18.88734318 +0000 UTC m=+141.631000729" watchObservedRunningTime="2026-02-19 08:23:27.364845294 +0000 UTC m=+150.108502743" Feb 19 08:23:27 crc kubenswrapper[4780]: W0219 08:23:27.548026 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-cfe3de71b77c8ea3152627ee805fddb35636c875b23569192a15f08ed0615d1d WatchSource:0}: Error finding container cfe3de71b77c8ea3152627ee805fddb35636c875b23569192a15f08ed0615d1d: Status 404 returned error can't find the container with id cfe3de71b77c8ea3152627ee805fddb35636c875b23569192a15f08ed0615d1d Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.878185 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e6bca08e27d607c343019b4eff8b0dc59b8158e361da99a14cd3e1c0d6e09e00"} Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.878645 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bb4947a65087220943465abb6d24a76382c764d8b9056afca758232ef20478aa"} Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.882710 4780 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeReady" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.883939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c9f6b42181e430d8571530bbe4bdf72b63a3e75ff86b7c7c6eb015e5fccefd4c"} Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.884038 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8c7a6b1f577c4111bab9d7a7c739c075fc8e4c2af852e02b979fad3ee59fcd07"} Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.884813 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.887958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"20167048dc26a8289e5c6a26b0a28781929cd3fc1c0e739401e9cfd9aa9a01de"} Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.888296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cfe3de71b77c8ea3152627ee805fddb35636c875b23569192a15f08ed0615d1d"} Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.948857 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9s5t"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.960384 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xt4l6"] Feb 19 08:23:27 
crc kubenswrapper[4780]: I0219 08:23:27.960723 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.962771 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.964471 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.966148 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.966460 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.980467 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.982761 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.983460 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.983660 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.983485 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.983834 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.983533 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.983544 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984160 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984258 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-p664d"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984282 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984317 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984560 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984691 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.986325 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.986796 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984834 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.986414 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.993191 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984774 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-p664d" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984878 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.988189 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22"] Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.994014 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.994229 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.994376 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.994528 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.994685 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.984915 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.985068 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.995209 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 
08:23:27 crc kubenswrapper[4780]: I0219 08:23:27.999722 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mzcjh"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.000321 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cv2g8"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.000771 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.000846 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.000826 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.003215 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6jldl"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.004036 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.004299 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.010842 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.011976 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.012608 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q9p69"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.012995 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.013442 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.013911 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.014527 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.016063 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ltjh9"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.016852 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.017084 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.017319 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.017444 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.017684 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.018214 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.018255 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.024290 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.024469 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.024575 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.024752 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.024830 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.025029 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.025224 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.027507 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.027721 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.027875 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.027996 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.028221 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.028346 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.028459 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.028564 4780 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.028661 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.027923 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.028920 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.029021 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.029214 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.044095 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.045640 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.029212 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.052249 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.054411 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.057302 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.063891 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.064019 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.067609 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.067871 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.069279 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-g6dcx"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.067869 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.069608 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068233 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.070031 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068328 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.069613 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068356 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068427 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068409 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.070303 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068514 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068620 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.070498 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068745 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068969 4780 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.069157 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.068868 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.070785 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.070777 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.090032 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.090632 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.090791 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.090867 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.092264 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.092715 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.092830 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094282 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094319 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-config\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 
08:23:28.094338 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-service-ca-bundle\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094359 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhdl\" (UniqueName: \"kubernetes.io/projected/ad7d9950-0edb-4999-86d0-269be581a7f7-kube-api-access-8zhdl\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094376 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a04d00b6-08f3-4210-9251-83466c020e6c-config\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7d9950-0edb-4999-86d0-269be581a7f7-config\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094423 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkz9x\" (UniqueName: 
\"kubernetes.io/projected/922ad30f-fa09-4003-8bcb-7389df141727-kube-api-access-rkz9x\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad7d9950-0edb-4999-86d0-269be581a7f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094465 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a04d00b6-08f3-4210-9251-83466c020e6c-machine-approver-tls\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094481 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922ad30f-fa09-4003-8bcb-7389df141727-serving-cert\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094499 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlz5\" (UniqueName: \"kubernetes.io/projected/efdec686-8e3a-4566-b46b-a2d6f4c48648-kube-api-access-rrlz5\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: 
\"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094532 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efdec686-8e3a-4566-b46b-a2d6f4c48648-serving-cert\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094550 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a04d00b6-08f3-4210-9251-83466c020e6c-auth-proxy-config\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ad7d9950-0edb-4999-86d0-269be581a7f7-images\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094584 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/efdec686-8e3a-4566-b46b-a2d6f4c48648-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.094637 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qffkn\" (UniqueName: \"kubernetes.io/projected/a04d00b6-08f3-4210-9251-83466c020e6c-kube-api-access-qffkn\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.097357 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kswgq"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.097432 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.097858 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.097952 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.098190 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.098683 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.098337 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.098421 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.099141 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.098644 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.104631 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.105931 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.106073 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.106209 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 
08:23:28.106327 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.106502 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.106684 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.107038 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.107612 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.107812 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.114382 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.115848 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.116066 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.116440 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.117224 4780 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.118904 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xrrt4"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.120936 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.123566 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.125739 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.127901 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6q46"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.127945 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.128472 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.128914 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.131406 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.132662 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.134246 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.136231 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.137074 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.137558 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.138303 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.138982 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.143431 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.144182 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.144264 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.144696 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.159328 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwdw8"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.159359 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.162147 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tfpvq"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.162990 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.165062 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.166822 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.167768 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.170489 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.173101 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.176652 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.176811 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.177651 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.177973 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.180478 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.181800 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.181989 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.186356 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.187454 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.188270 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.191166 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jsm4b"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.192422 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.193904 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8hmxg"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.195048 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9s5t"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.195218 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.195496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-config\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.195551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.195617 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhdl\" (UniqueName: \"kubernetes.io/projected/ad7d9950-0edb-4999-86d0-269be581a7f7-kube-api-access-8zhdl\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196219 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a04d00b6-08f3-4210-9251-83466c020e6c-config\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196257 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196282 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196307 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-etcd-service-ca\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196308 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a04d00b6-08f3-4210-9251-83466c020e6c-config\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196327 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196353 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkz9x\" (UniqueName: \"kubernetes.io/projected/922ad30f-fa09-4003-8bcb-7389df141727-kube-api-access-rkz9x\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196355 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cv2g8"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196373 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljz8m\" (UniqueName: \"kubernetes.io/projected/7a37b870-8cc6-418e-ae1b-c8b84a9ca356-kube-api-access-ljz8m\") pod \"cluster-samples-operator-665b6dd947-kq8m2\" (UID: \"7a37b870-8cc6-418e-ae1b-c8b84a9ca356\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196399 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7d9950-0edb-4999-86d0-269be581a7f7-config\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgwp\" (UniqueName: \"kubernetes.io/projected/8ce275f1-b63d-4597-8680-e96315dded0c-kube-api-access-4cgwp\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196475 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-trusted-ca-bundle\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196519 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad7d9950-0edb-4999-86d0-269be581a7f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196556 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a04d00b6-08f3-4210-9251-83466c020e6c-machine-approver-tls\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196577 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/798e5340-3b97-433e-a55c-88ef57c7b761-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196599 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922ad30f-fa09-4003-8bcb-7389df141727-serving-cert\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196618 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlz5\" (UniqueName: \"kubernetes.io/projected/efdec686-8e3a-4566-b46b-a2d6f4c48648-kube-api-access-rrlz5\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196637 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-serving-cert\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196659 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7cl\" (UniqueName: \"kubernetes.io/projected/515500df-562c-4659-a3ae-12efcd533619-kube-api-access-zh7cl\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 
08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196681 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-policies\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196698 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196724 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efdec686-8e3a-4566-b46b-a2d6f4c48648-serving-cert\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a04d00b6-08f3-4210-9251-83466c020e6c-auth-proxy-config\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196786 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196810 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ad7d9950-0edb-4999-86d0-269be581a7f7-images\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196837 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/efdec686-8e3a-4566-b46b-a2d6f4c48648-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196874 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75601b03-beff-4f9d-b191-56519a5c73a4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196892 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-dir\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.196948 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197222 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwj5x\" (UniqueName: \"kubernetes.io/projected/8431537e-f827-40a6-8be5-836d4b203c22-kube-api-access-pwj5x\") pod \"downloads-7954f5f757-p664d\" (UID: \"8431537e-f827-40a6-8be5-836d4b203c22\") " pod="openshift-console/downloads-7954f5f757-p664d" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197247 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798e5340-3b97-433e-a55c-88ef57c7b761-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqm6\" (UniqueName: \"kubernetes.io/projected/75601b03-beff-4f9d-b191-56519a5c73a4-kube-api-access-xhqm6\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197290 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-config\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197413 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-client-ca\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197445 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/059b8dda-8090-4773-b541-544c7ae97dc7-metrics-tls\") pod 
\"dns-operator-744455d44c-6jldl\" (UID: \"059b8dda-8090-4773-b541-544c7ae97dc7\") " pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.197481 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7d9950-0edb-4999-86d0-269be581a7f7-config\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198314 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-config\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-oauth-serving-cert\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198374 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198394 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce275f1-b63d-4597-8680-e96315dded0c-serving-cert\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198420 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198449 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75601b03-beff-4f9d-b191-56519a5c73a4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198467 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/515500df-562c-4659-a3ae-12efcd533619-serving-cert\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198486 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nkq\" (UniqueName: \"kubernetes.io/projected/798e5340-3b97-433e-a55c-88ef57c7b761-kube-api-access-k4nkq\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198690 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ad7d9950-0edb-4999-86d0-269be581a7f7-images\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.198949 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kswgq"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.199293 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a04d00b6-08f3-4210-9251-83466c020e6c-auth-proxy-config\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.199886 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/efdec686-8e3a-4566-b46b-a2d6f4c48648-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.200703 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/515500df-562c-4659-a3ae-12efcd533619-etcd-client\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201144 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-service-ca\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qffkn\" (UniqueName: \"kubernetes.io/projected/a04d00b6-08f3-4210-9251-83466c020e6c-kube-api-access-qffkn\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201242 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a37b870-8cc6-418e-ae1b-c8b84a9ca356-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kq8m2\" (UID: \"7a37b870-8cc6-418e-ae1b-c8b84a9ca356\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201265 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66xg\" (UniqueName: \"kubernetes.io/projected/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-kube-api-access-n66xg\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc 
kubenswrapper[4780]: I0219 08:23:28.201287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-oauth-config\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201309 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-etcd-ca\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201334 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201354 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svq4c\" (UniqueName: \"kubernetes.io/projected/66917461-2afb-4a36-83fe-4ff8a0be77f8-kube-api-access-svq4c\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201855 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201892 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.201989 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-config\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.202015 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-service-ca-bundle\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.202035 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvbs\" (UniqueName: \"kubernetes.io/projected/059b8dda-8090-4773-b541-544c7ae97dc7-kube-api-access-qgvbs\") pod \"dns-operator-744455d44c-6jldl\" (UID: \"059b8dda-8090-4773-b541-544c7ae97dc7\") " pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.202068 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.202731 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-config\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.202738 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-service-ca-bundle\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.203106 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.203251 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922ad30f-fa09-4003-8bcb-7389df141727-serving-cert\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.203301 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a04d00b6-08f3-4210-9251-83466c020e6c-machine-approver-tls\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.203489 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efdec686-8e3a-4566-b46b-a2d6f4c48648-serving-cert\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.204065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922ad30f-fa09-4003-8bcb-7389df141727-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.205415 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad7d9950-0edb-4999-86d0-269be581a7f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.211964 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.213353 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p664d"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 
08:23:28.214801 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6jldl"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.216267 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.218513 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6q46"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.219465 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.220782 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xt4l6"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.221214 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.222191 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8hmxg"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.223538 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.225113 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.227021 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.228286 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.229662 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.231071 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tfpvq"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.233562 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q9p69"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.234858 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.236157 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.237336 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.238610 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.241563 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mzcjh"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.241932 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.243066 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.245019 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xrrt4"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.246554 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.249116 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.250962 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.252088 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pxg7d"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.253078 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.253470 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.254666 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwdw8"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.255792 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ltjh9"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.256872 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.258334 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4frqf"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.260824 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pxg7d"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.260932 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.262151 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.262428 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jsm4b"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.263756 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.264872 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.282337 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.282617 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.284596 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4frqf"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.285680 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8h7ch"] Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.286422 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.302056 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.302803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljz8m\" (UniqueName: \"kubernetes.io/projected/7a37b870-8cc6-418e-ae1b-c8b84a9ca356-kube-api-access-ljz8m\") pod \"cluster-samples-operator-665b6dd947-kq8m2\" (UID: \"7a37b870-8cc6-418e-ae1b-c8b84a9ca356\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.302847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cgwp\" (UniqueName: \"kubernetes.io/projected/8ce275f1-b63d-4597-8680-e96315dded0c-kube-api-access-4cgwp\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.302879 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-trusted-ca-bundle\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.302911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 
crc kubenswrapper[4780]: I0219 08:23:28.302954 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798e5340-3b97-433e-a55c-88ef57c7b761-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.302980 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7cl\" (UniqueName: \"kubernetes.io/projected/515500df-562c-4659-a3ae-12efcd533619-kube-api-access-zh7cl\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-serving-cert\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303050 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-policies\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303076 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: 
\"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303101 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-dir\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303267 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75601b03-beff-4f9d-b191-56519a5c73a4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303293 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwj5x\" (UniqueName: \"kubernetes.io/projected/8431537e-f827-40a6-8be5-836d4b203c22-kube-api-access-pwj5x\") pod \"downloads-7954f5f757-p664d\" (UID: \"8431537e-f827-40a6-8be5-836d4b203c22\") " pod="openshift-console/downloads-7954f5f757-p664d" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303346 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798e5340-3b97-433e-a55c-88ef57c7b761-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-client-ca\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303395 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqm6\" (UniqueName: \"kubernetes.io/projected/75601b03-beff-4f9d-b191-56519a5c73a4-kube-api-access-xhqm6\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303418 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-config\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303443 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/059b8dda-8090-4773-b541-544c7ae97dc7-metrics-tls\") pod \"dns-operator-744455d44c-6jldl\" (UID: \"059b8dda-8090-4773-b541-544c7ae97dc7\") " pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303466 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce275f1-b63d-4597-8680-e96315dded0c-serving-cert\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303493 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-config\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303517 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-oauth-serving-cert\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303543 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303571 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303598 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75601b03-beff-4f9d-b191-56519a5c73a4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 
08:23:28.303623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/515500df-562c-4659-a3ae-12efcd533619-serving-cert\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nkq\" (UniqueName: \"kubernetes.io/projected/798e5340-3b97-433e-a55c-88ef57c7b761-kube-api-access-k4nkq\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303670 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/515500df-562c-4659-a3ae-12efcd533619-etcd-client\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303705 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-service-ca\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303734 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a37b870-8cc6-418e-ae1b-c8b84a9ca356-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kq8m2\" (UID: \"7a37b870-8cc6-418e-ae1b-c8b84a9ca356\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303791 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n66xg\" (UniqueName: \"kubernetes.io/projected/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-kube-api-access-n66xg\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303817 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-oauth-config\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303844 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-etcd-ca\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303928 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svq4c\" (UniqueName: \"kubernetes.io/projected/66917461-2afb-4a36-83fe-4ff8a0be77f8-kube-api-access-svq4c\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303954 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvbs\" (UniqueName: \"kubernetes.io/projected/059b8dda-8090-4773-b541-544c7ae97dc7-kube-api-access-qgvbs\") pod \"dns-operator-744455d44c-6jldl\" (UID: \"059b8dda-8090-4773-b541-544c7ae97dc7\") " pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798e5340-3b97-433e-a55c-88ef57c7b761-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.303986 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-config\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304066 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304059 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-policies\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304088 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304114 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 
08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304153 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-etcd-service-ca\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304382 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-trusted-ca-bundle\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.304833 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-config\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.305170 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: 
\"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.305501 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-oauth-serving-cert\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.306000 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.306811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-etcd-ca\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.307037 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.307289 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.307795 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.307867 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75601b03-beff-4f9d-b191-56519a5c73a4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.307101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/515500df-562c-4659-a3ae-12efcd533619-etcd-service-ca\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.308221 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-service-ca\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.308355 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.308657 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.309162 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-client-ca\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.309175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-dir\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.309700 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-config\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 
08:23:28.309997 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-serving-cert\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.310374 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-config\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.311677 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/515500df-562c-4659-a3ae-12efcd533619-serving-cert\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.311887 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a37b870-8cc6-418e-ae1b-c8b84a9ca356-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kq8m2\" (UID: \"7a37b870-8cc6-418e-ae1b-c8b84a9ca356\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.312340 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/515500df-562c-4659-a3ae-12efcd533619-etcd-client\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" Feb 19 08:23:28 crc kubenswrapper[4780]: 
I0219 08:23:28.313091 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75601b03-beff-4f9d-b191-56519a5c73a4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.313096 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.313226 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.313299 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.314186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.314751 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-oauth-config\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.314883 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798e5340-3b97-433e-a55c-88ef57c7b761-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.315530 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.315719 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/059b8dda-8090-4773-b541-544c7ae97dc7-metrics-tls\") pod \"dns-operator-744455d44c-6jldl\" (UID: \"059b8dda-8090-4773-b541-544c7ae97dc7\") " pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.316107 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.329004 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.337490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.342588 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.362459 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.381773 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.393778 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce275f1-b63d-4597-8680-e96315dded0c-serving-cert\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.442638 4780 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.463881 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.482276 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.511697 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.522891 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.542833 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.561953 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.582321 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.601292 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.623190 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.642086 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.662410 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.693418 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.703882 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.724167 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.743192 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.762413 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.782782 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.804347 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.833423 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.842187 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.861856 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.882243 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.903280 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.923339 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.942585 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.961473 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 19 08:23:28 crc kubenswrapper[4780]: I0219 08:23:28.982043 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.002185 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.021845 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.043257 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.061775 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.083693 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.101723 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.123544 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.143390 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.160763 4780 request.go:700] Waited for 1.015824456s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.163544 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.183284 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.203866 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.224655 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.242256 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.262887 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.290541 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.302893 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.323406 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.343340 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.363003 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.382874 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.403267 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.422627 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.442370 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.463114 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.483485 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.502886 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.523001 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.543836 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.562777 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.583057 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.603895 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.623305 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.643180 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.663278 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.682042 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.702719 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.722205 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.743036 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.772054 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.782396 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.803419 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.823305 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.842093 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.880666 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhdl\" (UniqueName: \"kubernetes.io/projected/ad7d9950-0edb-4999-86d0-269be581a7f7-kube-api-access-8zhdl\") pod \"machine-api-operator-5694c8668f-b9s5t\" (UID: \"ad7d9950-0edb-4999-86d0-269be581a7f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.906175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkz9x\" (UniqueName: \"kubernetes.io/projected/922ad30f-fa09-4003-8bcb-7389df141727-kube-api-access-rkz9x\") pod \"authentication-operator-69f744f599-xt4l6\" (UID: \"922ad30f-fa09-4003-8bcb-7389df141727\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.922916 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlz5\" (UniqueName: \"kubernetes.io/projected/efdec686-8e3a-4566-b46b-a2d6f4c48648-kube-api-access-rrlz5\") pod \"openshift-config-operator-7777fb866f-hsvc2\" (UID: \"efdec686-8e3a-4566-b46b-a2d6f4c48648\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.943173 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.952603 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qffkn\" (UniqueName: \"kubernetes.io/projected/a04d00b6-08f3-4210-9251-83466c020e6c-kube-api-access-qffkn\") pod \"machine-approver-56656f9798-s45d4\" (UID: \"a04d00b6-08f3-4210-9251-83466c020e6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.963261 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 08:23:29 crc kubenswrapper[4780]: I0219 08:23:29.982565 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.003565 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.023180 4780 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.042960 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.065198 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.083226 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.100066 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.102750 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.121862 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.123314 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 08:23:30 crc kubenswrapper[4780]: W0219 08:23:30.138138 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda04d00b6_08f3_4210_9251_83466c020e6c.slice/crio-5943fa44df63f70cd2d0084f21cff5211390ff269d14e98f1322b09f51240948 WatchSource:0}: Error finding container 5943fa44df63f70cd2d0084f21cff5211390ff269d14e98f1322b09f51240948: Status 404 returned error can't find the container with id 5943fa44df63f70cd2d0084f21cff5211390ff269d14e98f1322b09f51240948
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.157093 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.166012 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljz8m\" (UniqueName: \"kubernetes.io/projected/7a37b870-8cc6-418e-ae1b-c8b84a9ca356-kube-api-access-ljz8m\") pod \"cluster-samples-operator-665b6dd947-kq8m2\" (UID: \"7a37b870-8cc6-418e-ae1b-c8b84a9ca356\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.180413 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.180948 4780 request.go:700] Waited for 1.877779669s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.184971 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cgwp\" (UniqueName: \"kubernetes.io/projected/8ce275f1-b63d-4597-8680-e96315dded0c-kube-api-access-4cgwp\") pod \"controller-manager-879f6c89f-kswgq\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.205775 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7cl\" (UniqueName: \"kubernetes.io/projected/515500df-562c-4659-a3ae-12efcd533619-kube-api-access-zh7cl\") pod \"etcd-operator-b45778765-ltjh9\" (UID: \"515500df-562c-4659-a3ae-12efcd533619\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.230010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-n7vfp\" (UID: \"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.236142 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.249039 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n66xg\" (UniqueName: \"kubernetes.io/projected/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-kube-api-access-n66xg\") pod \"console-f9d7485db-mzcjh\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " pod="openshift-console/console-f9d7485db-mzcjh"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.262189 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nkq\" (UniqueName: \"kubernetes.io/projected/798e5340-3b97-433e-a55c-88ef57c7b761-kube-api-access-k4nkq\") pod \"openshift-apiserver-operator-796bbdcf4f-p7fp6\" (UID: \"798e5340-3b97-433e-a55c-88ef57c7b761\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.283171 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvbs\" (UniqueName: \"kubernetes.io/projected/059b8dda-8090-4773-b541-544c7ae97dc7-kube-api-access-qgvbs\") pod \"dns-operator-744455d44c-6jldl\" (UID: \"059b8dda-8090-4773-b541-544c7ae97dc7\") " pod="openshift-dns-operator/dns-operator-744455d44c-6jldl"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.294492 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mzcjh"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.301982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svq4c\" (UniqueName: \"kubernetes.io/projected/66917461-2afb-4a36-83fe-4ff8a0be77f8-kube-api-access-svq4c\") pod \"oauth-openshift-558db77b4-q9p69\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q9p69"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.316773 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9s5t"]
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.317951 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwj5x\" (UniqueName: \"kubernetes.io/projected/8431537e-f827-40a6-8be5-836d4b203c22-kube-api-access-pwj5x\") pod \"downloads-7954f5f757-p664d\" (UID: \"8431537e-f827-40a6-8be5-836d4b203c22\") " pod="openshift-console/downloads-7954f5f757-p664d"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.320537 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6jldl"
Feb 19 08:23:30 crc kubenswrapper[4780]: W0219 08:23:30.326814 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad7d9950_0edb_4999_86d0_269be581a7f7.slice/crio-522e4b920167c7075721472373602e00f1f16346d0065cae809eb582de3fcb0f WatchSource:0}: Error finding container 522e4b920167c7075721472373602e00f1f16346d0065cae809eb582de3fcb0f: Status 404 returned error can't find the container with id 522e4b920167c7075721472373602e00f1f16346d0065cae809eb582de3fcb0f
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.335417 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.337359 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqm6\" (UniqueName: \"kubernetes.io/projected/75601b03-beff-4f9d-b191-56519a5c73a4-kube-api-access-xhqm6\") pod \"openshift-controller-manager-operator-756b6f6bc6-fdh22\" (UID: \"75601b03-beff-4f9d-b191-56519a5c73a4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.354483 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.363205 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.382731 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.403191 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.425215 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xt4l6"]
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.436839 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqwk\" (UniqueName: \"kubernetes.io/projected/b57f85fe-b6a8-4e85-902e-bc8227fac331-kube-api-access-bmqwk\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.436884 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmmfr\" (UniqueName: \"kubernetes.io/projected/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-kube-api-access-jmmfr\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-etcd-client\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437046 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-audit-dir\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437092 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmb2s\" (UniqueName: \"kubernetes.io/projected/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-kube-api-access-fmb2s\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437164 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437485 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437513 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-audit-policies\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-certificates\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437581 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437692 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d487a323-7532-4d54-9a3d-0cab7876247f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437733 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55knz\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-kube-api-access-55knz\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437794 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57f85fe-b6a8-4e85-902e-bc8227fac331-service-ca-bundle\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437871 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-bound-sa-token\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437894 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d487a323-7532-4d54-9a3d-0cab7876247f-config\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.437909 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.438116 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-stats-auth\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.438258 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.438284 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-encryption-config\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.438306 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"
Feb 19 08:23:30 crc kubenswrapper[4780]: E0219 08:23:30.439421 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:30.939340338 +0000 UTC m=+153.682997787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.439817 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.440188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-trusted-ca\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.440223 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-config\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.440540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-default-certificate\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.440595 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.441190 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-tls\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.441226 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxbd\" (UniqueName: \"kubernetes.io/projected/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-kube-api-access-hvxbd\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.441281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-metrics-certs\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.441310 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d487a323-7532-4d54-9a3d-0cab7876247f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.441332 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-serving-cert\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.441509 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-client-ca\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"
Feb 19 08:23:30 crc kubenswrapper[4780]: W0219 08:23:30.446169 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod922ad30f_fa09_4003_8bcb_7389df141727.slice/crio-b8473a85c11aca03cfc101be8d57f3bf9dd6a509c9d3e4c3da0f384f4a9a50ea WatchSource:0}: Error finding container b8473a85c11aca03cfc101be8d57f3bf9dd6a509c9d3e4c3da0f384f4a9a50ea: Status 404 returned error can't find the container with id b8473a85c11aca03cfc101be8d57f3bf9dd6a509c9d3e4c3da0f384f4a9a50ea
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.469476 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.547677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.547859 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-audit-dir\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.547884 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmb2s\" (UniqueName: \"kubernetes.io/projected/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-kube-api-access-fmb2s\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.547977 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-srv-cert\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.547996 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjf56\" (UID: \"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548028 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c3e7e898-b119-4a31-8073-99ab35588548-certs\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548062 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrwt\" (UniqueName: \"kubernetes.io/projected/42b42bbb-ed14-474c-b81d-9fa63e652886-kube-api-access-hdrwt\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548145 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d487a323-7532-4d54-9a3d-0cab7876247f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548184 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cprlw\" (UniqueName: \"kubernetes.io/projected/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-kube-api-access-cprlw\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548203 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bw88\" (UniqueName: \"kubernetes.io/projected/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-kube-api-access-9bw88\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57f85fe-b6a8-4e85-902e-bc8227fac331-service-ca-bundle\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548244 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548283 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-swvlx\" (UniqueName: \"kubernetes.io/projected/0a9ee3ae-5c55-4682-af7d-4ff31566df16-kube-api-access-swvlx\") pod \"multus-admission-controller-857f4d67dd-tfpvq\" (UID: \"0a9ee3ae-5c55-4682-af7d-4ff31566df16\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:30 crc kubenswrapper[4780]: E0219 08:23:30.548318 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.048290914 +0000 UTC m=+153.791948363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548393 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-bound-sa-token\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548427 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d487a323-7532-4d54-9a3d-0cab7876247f-config\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.548451 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-serving-cert\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548474 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548546 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec31ad06-7520-433e-b137-b8e4a3fcd686-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-trusted-ca\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548667 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d62c186f-40e9-48a9-9af3-2d44d3aa867f-serving-cert\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42b42bbb-ed14-474c-b81d-9fa63e652886-srv-cert\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548749 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-image-import-ca\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548780 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548797 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs9s\" (UniqueName: \"kubernetes.io/projected/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-kube-api-access-mbs9s\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548831 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-config\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548851 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-metrics-tls\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548870 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/08448379-9a6b-4daa-8f71-c4087e4c4553-apiservice-cert\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548889 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7t7x\" (UniqueName: \"kubernetes.io/projected/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-kube-api-access-p7t7x\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548905 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwh7h\" 
(UniqueName: \"kubernetes.io/projected/edc65727-b71e-48c1-bdf4-a72d483b1ca5-kube-api-access-qwh7h\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548923 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-proxy-tls\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548940 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnsml\" (UniqueName: \"kubernetes.io/projected/33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1-kube-api-access-hnsml\") pod \"migrator-59844c95c7-xlq4r\" (UID: \"33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.548977 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-registration-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg748\" (UniqueName: \"kubernetes.io/projected/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-kube-api-access-rg748\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.549025 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzhq\" (UniqueName: \"kubernetes.io/projected/d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7-kube-api-access-mlzhq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjf56\" (UID: \"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549056 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-tls\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549085 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-metrics-tls\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549105 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpdr\" (UniqueName: \"kubernetes.io/projected/cf863c82-61de-4465-acee-65c52424e261-kube-api-access-dlpdr\") pod \"package-server-manager-789f6589d5-hbk8s\" (UID: \"cf863c82-61de-4465-acee-65c52424e261\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549136 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-metrics-certs\") pod 
\"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549153 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d487a323-7532-4d54-9a3d-0cab7876247f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-serving-cert\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549191 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0778220-8d41-435d-9685-9394fd991915-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-client-ca\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549238 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/08448379-9a6b-4daa-8f71-c4087e4c4553-tmpfs\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549268 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-encryption-config\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549284 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-audit-dir\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549314 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmqwk\" (UniqueName: \"kubernetes.io/projected/b57f85fe-b6a8-4e85-902e-bc8227fac331-kube-api-access-bmqwk\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549335 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-etcd-client\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.549351 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfm8\" (UniqueName: \"kubernetes.io/projected/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-kube-api-access-cmfm8\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549381 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmmfr\" (UniqueName: \"kubernetes.io/projected/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-kube-api-access-jmmfr\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549400 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-etcd-client\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.549411 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-audit-dir\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553703 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57f85fe-b6a8-4e85-902e-bc8227fac331-service-ca-bundle\") pod \"router-default-5444994796-g6dcx\" (UID: 
\"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553763 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-secret-volume\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553832 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a379ce4-1016-48f8-beb7-c20b3c014839-config\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec31ad06-7520-433e-b137-b8e4a3fcd686-proxy-tls\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553887 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553951 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf863c82-61de-4465-acee-65c52424e261-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hbk8s\" (UID: \"cf863c82-61de-4465-acee-65c52424e261\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.553984 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-plugins-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554006 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0778220-8d41-435d-9685-9394fd991915-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554030 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/08448379-9a6b-4daa-8f71-c4087e4c4553-webhook-cert\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554088 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-csi-data-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554296 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554330 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-audit-policies\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554356 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.554459 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-images\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.555425 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-p664d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.559154 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-certificates\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.560327 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d487a323-7532-4d54-9a3d-0cab7876247f-config\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.562697 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-audit-policies\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: E0219 08:23:30.563842 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.063822094 +0000 UTC m=+153.807479543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.563927 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-config\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.564240 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.565113 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-client-ca\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.565821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-certificates\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567044 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567150 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567243 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-signing-key\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567434 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-config\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567566 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d62c186f-40e9-48a9-9af3-2d44d3aa867f-config\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567802 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55knz\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-kube-api-access-55knz\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.567834 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfqp2\" (UniqueName: \"kubernetes.io/projected/f841ab7c-b591-480d-8c4a-70003c08e679-kube-api-access-wfqp2\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.568202 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.568237 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42b42bbb-ed14-474c-b81d-9fa63e652886-profile-collector-cert\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.569658 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.569797 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-stats-auth\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.570406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.570855 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-serving-cert\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.572047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a379ce4-1016-48f8-beb7-c20b3c014839-serving-cert\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" 
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.572620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-tls\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.573029 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.573060 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-encryption-config\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.573766 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ntw\" (UniqueName: \"kubernetes.io/projected/d62c186f-40e9-48a9-9af3-2d44d3aa867f-kube-api-access-k9ntw\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.573802 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: 
\"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.578904 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.579486 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-metrics-certs\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580178 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqf4\" (UniqueName: \"kubernetes.io/projected/7a379ce4-1016-48f8-beb7-c20b3c014839-kube-api-access-dvqf4\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580271 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580272 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580296 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-config\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580334 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-etcd-client\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580384 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580420 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-config-volume\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580444 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4h5\" (UniqueName: \"kubernetes.io/projected/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-kube-api-access-ln4h5\") pod \"dns-default-8hmxg\" (UID: 
\"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.580490 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-trusted-ca\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581020 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmnh\" (UniqueName: \"kubernetes.io/projected/f0778220-8d41-435d-9685-9394fd991915-kube-api-access-8tmnh\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c3e7e898-b119-4a31-8073-99ab35588548-node-bootstrap-token\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581200 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-default-certificate\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c186f-40e9-48a9-9af3-2d44d3aa867f-trusted-ca\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lzh\" (UniqueName: \"kubernetes.io/projected/ec31ad06-7520-433e-b137-b8e4a3fcd686-kube-api-access-m7lzh\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjp6\" (UniqueName: \"kubernetes.io/projected/c3e7e898-b119-4a31-8073-99ab35588548-kube-api-access-fvjp6\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581583 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581618 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf79t\" (UniqueName: \"kubernetes.io/projected/17d08c15-3d45-49f2-85e2-d21567e5e5c3-kube-api-access-wf79t\") pod \"ingress-canary-pxg7d\" (UID: \"17d08c15-3d45-49f2-85e2-d21567e5e5c3\") " 
pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-mountpoint-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d487a323-7532-4d54-9a3d-0cab7876247f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.581766 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-signing-cabundle\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.582662 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxbd\" (UniqueName: \"kubernetes.io/projected/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-kube-api-access-hvxbd\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.582712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-socket-dir\") pod 
\"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.582762 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-config-volume\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.582829 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hlq\" (UniqueName: \"kubernetes.io/projected/08448379-9a6b-4daa-8f71-c4087e4c4553-kube-api-access-q5hlq\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.586674 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.587258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-trusted-ca\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.592237 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594410 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d08c15-3d45-49f2-85e2-d21567e5e5c3-cert\") pod \"ingress-canary-pxg7d\" (UID: \"17d08c15-3d45-49f2-85e2-d21567e5e5c3\") " pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594501 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-node-pullsecrets\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594536 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9ee3ae-5c55-4682-af7d-4ff31566df16-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tfpvq\" (UID: \"0a9ee3ae-5c55-4682-af7d-4ff31566df16\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594564 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-audit\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594614 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594648 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.594797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.595732 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-encryption-config\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.596711 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.603629 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d487a323-7532-4d54-9a3d-0cab7876247f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ljdwz\" (UID: \"d487a323-7532-4d54-9a3d-0cab7876247f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.605394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-stats-auth\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.614803 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b57f85fe-b6a8-4e85-902e-bc8227fac331-default-certificate\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") " pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.628328 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.689420 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmb2s\" (UniqueName: \"kubernetes.io/projected/dfdb9fee-9a89-4659-b7d7-6ce459654d9f-kube-api-access-fmb2s\") pod 
\"cluster-image-registry-operator-dc59b4c8b-972bd\" (UID: \"dfdb9fee-9a89-4659-b7d7-6ce459654d9f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d62c186f-40e9-48a9-9af3-2d44d3aa867f-serving-cert\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696575 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42b42bbb-ed14-474c-b81d-9fa63e652886-srv-cert\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696593 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-image-import-ca\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696610 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696628 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbs9s\" (UniqueName: \"kubernetes.io/projected/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-kube-api-access-mbs9s\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696648 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-metrics-tls\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696669 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/08448379-9a6b-4daa-8f71-c4087e4c4553-apiservice-cert\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696684 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t7x\" (UniqueName: \"kubernetes.io/projected/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-kube-api-access-p7t7x\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.696706 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh7h\" (UniqueName: \"kubernetes.io/projected/edc65727-b71e-48c1-bdf4-a72d483b1ca5-kube-api-access-qwh7h\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696722 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-proxy-tls\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696740 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnsml\" (UniqueName: \"kubernetes.io/projected/33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1-kube-api-access-hnsml\") pod \"migrator-59844c95c7-xlq4r\" (UID: \"33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696759 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-registration-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696780 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg748\" (UniqueName: \"kubernetes.io/projected/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-kube-api-access-rg748\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696797 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzhq\" (UniqueName: \"kubernetes.io/projected/d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7-kube-api-access-mlzhq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjf56\" (UID: \"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696819 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-metrics-tls\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpdr\" (UniqueName: \"kubernetes.io/projected/cf863c82-61de-4465-acee-65c52424e261-kube-api-access-dlpdr\") pod \"package-server-manager-789f6589d5-hbk8s\" (UID: \"cf863c82-61de-4465-acee-65c52424e261\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0778220-8d41-435d-9685-9394fd991915-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696883 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/08448379-9a6b-4daa-8f71-c4087e4c4553-tmpfs\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696900 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-audit-dir\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696920 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-encryption-config\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-etcd-client\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696956 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfm8\" (UniqueName: \"kubernetes.io/projected/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-kube-api-access-cmfm8\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.696981 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-secret-volume\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a379ce4-1016-48f8-beb7-c20b3c014839-config\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697037 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec31ad06-7520-433e-b137-b8e4a3fcd686-proxy-tls\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697053 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf863c82-61de-4465-acee-65c52424e261-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hbk8s\" (UID: \"cf863c82-61de-4465-acee-65c52424e261\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-plugins-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697086 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/08448379-9a6b-4daa-8f71-c4087e4c4553-webhook-cert\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697089 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697105 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0778220-8d41-435d-9685-9394fd991915-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697144 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-csi-data-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: 
I0219 08:23:30.697173 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697187 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-images\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697205 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697221 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-signing-key\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697235 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-config\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697252 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d62c186f-40e9-48a9-9af3-2d44d3aa867f-config\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfqp2\" (UniqueName: \"kubernetes.io/projected/f841ab7c-b591-480d-8c4a-70003c08e679-kube-api-access-wfqp2\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42b42bbb-ed14-474c-b81d-9fa63e652886-profile-collector-cert\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697305 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a379ce4-1016-48f8-beb7-c20b3c014839-serving-cert\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697324 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ntw\" (UniqueName: 
\"kubernetes.io/projected/d62c186f-40e9-48a9-9af3-2d44d3aa867f-kube-api-access-k9ntw\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697341 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvqf4\" (UniqueName: \"kubernetes.io/projected/7a379ce4-1016-48f8-beb7-c20b3c014839-kube-api-access-dvqf4\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697360 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-config\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697375 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697392 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-config-volume\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697411 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ln4h5\" (UniqueName: \"kubernetes.io/projected/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-kube-api-access-ln4h5\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697430 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmnh\" (UniqueName: \"kubernetes.io/projected/f0778220-8d41-435d-9685-9394fd991915-kube-api-access-8tmnh\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697448 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c3e7e898-b119-4a31-8073-99ab35588548-node-bootstrap-token\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c186f-40e9-48a9-9af3-2d44d3aa867f-trusted-ca\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697482 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7lzh\" (UniqueName: \"kubernetes.io/projected/ec31ad06-7520-433e-b137-b8e4a3fcd686-kube-api-access-m7lzh\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697498 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjp6\" (UniqueName: \"kubernetes.io/projected/c3e7e898-b119-4a31-8073-99ab35588548-kube-api-access-fvjp6\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697515 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf79t\" (UniqueName: \"kubernetes.io/projected/17d08c15-3d45-49f2-85e2-d21567e5e5c3-kube-api-access-wf79t\") pod \"ingress-canary-pxg7d\" (UID: \"17d08c15-3d45-49f2-85e2-d21567e5e5c3\") " pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-mountpoint-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-signing-cabundle\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697532 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-bound-sa-token\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: 
\"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-socket-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-config-volume\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697672 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hlq\" (UniqueName: \"kubernetes.io/projected/08448379-9a6b-4daa-8f71-c4087e4c4553-kube-api-access-q5hlq\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697722 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d08c15-3d45-49f2-85e2-d21567e5e5c3-cert\") pod \"ingress-canary-pxg7d\" (UID: \"17d08c15-3d45-49f2-85e2-d21567e5e5c3\") " pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697756 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-node-pullsecrets\") pod 
\"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-audit\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697806 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697832 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697859 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9ee3ae-5c55-4682-af7d-4ff31566df16-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tfpvq\" (UID: \"0a9ee3ae-5c55-4682-af7d-4ff31566df16\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697887 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-srv-cert\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697912 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjf56\" (UID: \"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697945 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c3e7e898-b119-4a31-8073-99ab35588548-certs\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697974 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrwt\" (UniqueName: \"kubernetes.io/projected/42b42bbb-ed14-474c-b81d-9fa63e652886-kube-api-access-hdrwt\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" 
Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698056 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cprlw\" (UniqueName: \"kubernetes.io/projected/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-kube-api-access-cprlw\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698080 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bw88\" (UniqueName: \"kubernetes.io/projected/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-kube-api-access-9bw88\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698106 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swvlx\" (UniqueName: \"kubernetes.io/projected/0a9ee3ae-5c55-4682-af7d-4ff31566df16-kube-api-access-swvlx\") pod \"multus-admission-controller-857f4d67dd-tfpvq\" (UID: \"0a9ee3ae-5c55-4682-af7d-4ff31566df16\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698167 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-serving-cert\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698193 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec31ad06-7520-433e-b137-b8e4a3fcd686-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-trusted-ca\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.698448 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-config-volume\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.699022 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a379ce4-1016-48f8-beb7-c20b3c014839-config\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.699824 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-trusted-ca\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: E0219 08:23:30.700402 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.200380494 +0000 UTC m=+153.944037943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.700998 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-node-pullsecrets\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.702094 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0778220-8d41-435d-9685-9394fd991915-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.702335 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-registration-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.703196 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-audit\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.703453 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-secret-volume\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.704337 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.697964 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmmfr\" (UniqueName: \"kubernetes.io/projected/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-kube-api-access-jmmfr\") pod \"route-controller-manager-6576b87f9c-bqpdd\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.705553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.705709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-audit-dir\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.706351 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/08448379-9a6b-4daa-8f71-c4087e4c4553-tmpfs\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.706573 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-encryption-config\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.706650 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-mountpoint-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: 
I0219 08:23:30.706839 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-socket-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.704845 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a379ce4-1016-48f8-beb7-c20b3c014839-serving-cert\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.708303 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.709112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-metrics-tls\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.709444 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c186f-40e9-48a9-9af3-2d44d3aa867f-trusted-ca\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.709521 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-csi-data-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.709562 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.710542 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-config\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.711354 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/08448379-9a6b-4daa-8f71-c4087e4c4553-apiservice-cert\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.711915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec31ad06-7520-433e-b137-b8e4a3fcd686-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.712462 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d08c15-3d45-49f2-85e2-d21567e5e5c3-cert\") pod \"ingress-canary-pxg7d\" (UID: \"17d08c15-3d45-49f2-85e2-d21567e5e5c3\") " pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.712510 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/edc65727-b71e-48c1-bdf4-a72d483b1ca5-plugins-dir\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.712817 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-image-import-ca\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.713795 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-config\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714169 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0778220-8d41-435d-9685-9394fd991915-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714252 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-signing-cabundle\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d62c186f-40e9-48a9-9af3-2d44d3aa867f-config\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714562 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714571 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-config-volume\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714642 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmqwk\" (UniqueName: \"kubernetes.io/projected/b57f85fe-b6a8-4e85-902e-bc8227fac331-kube-api-access-bmqwk\") pod \"router-default-5444994796-g6dcx\" (UID: \"b57f85fe-b6a8-4e85-902e-bc8227fac331\") 
" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.714938 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-metrics-tls\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.715062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d62c186f-40e9-48a9-9af3-2d44d3aa867f-serving-cert\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.715113 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-etcd-client\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.715430 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9ee3ae-5c55-4682-af7d-4ff31566df16-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tfpvq\" (UID: \"0a9ee3ae-5c55-4682-af7d-4ff31566df16\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.715733 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec31ad06-7520-433e-b137-b8e4a3fcd686-proxy-tls\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.717994 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.718270 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-srv-cert\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.718639 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c3e7e898-b119-4a31-8073-99ab35588548-certs\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.719728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c3e7e898-b119-4a31-8073-99ab35588548-node-bootstrap-token\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.727469 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-images\") pod 
\"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.727876 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-proxy-tls\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.728263 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/08448379-9a6b-4daa-8f71-c4087e4c4553-webhook-cert\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.728278 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.728956 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjf56\" (UID: \"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.730007 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf863c82-61de-4465-acee-65c52424e261-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hbk8s\" (UID: \"cf863c82-61de-4465-acee-65c52424e261\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.730153 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-serving-cert\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.730840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55knz\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-kube-api-access-55knz\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.731406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42b42bbb-ed14-474c-b81d-9fa63e652886-srv-cert\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.735609 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42b42bbb-ed14-474c-b81d-9fa63e652886-profile-collector-cert\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.741473 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-signing-key\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.757462 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxbd\" (UniqueName: \"kubernetes.io/projected/a01be762-6c87-4e78-b2e9-0f2bb29af8cb-kube-api-access-hvxbd\") pod \"apiserver-7bbb656c7d-s6sx8\" (UID: \"a01be762-6c87-4e78-b2e9-0f2bb29af8cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.782453 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hlq\" (UniqueName: \"kubernetes.io/projected/08448379-9a6b-4daa-8f71-c4087e4c4553-kube-api-access-q5hlq\") pod \"packageserver-d55dfcdfc-5t7th\" (UID: \"08448379-9a6b-4daa-8f71-c4087e4c4553\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.801117 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.802102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:30 crc kubenswrapper[4780]: E0219 08:23:30.802954 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.302936722 +0000 UTC m=+154.046594161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.813516 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnsml\" (UniqueName: \"kubernetes.io/projected/33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1-kube-api-access-hnsml\") pod \"migrator-59844c95c7-xlq4r\" (UID: \"33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.832426 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrwt\" (UniqueName: 
\"kubernetes.io/projected/42b42bbb-ed14-474c-b81d-9fa63e652886-kube-api-access-hdrwt\") pod \"catalog-operator-68c6474976-sqsb4\" (UID: \"42b42bbb-ed14-474c-b81d-9fa63e652886\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.841252 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.846648 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kswgq"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.856310 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg748\" (UniqueName: \"kubernetes.io/projected/79d7a2e4-9a84-4f25-aa19-a07b107cfd4c-kube-api-access-rg748\") pod \"apiserver-76f77b778f-xrrt4\" (UID: \"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c\") " pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.862647 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.864518 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzhq\" (UniqueName: \"kubernetes.io/projected/d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7-kube-api-access-mlzhq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjf56\" (UID: \"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.889885 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpdr\" (UniqueName: \"kubernetes.io/projected/cf863c82-61de-4465-acee-65c52424e261-kube-api-access-dlpdr\") pod \"package-server-manager-789f6589d5-hbk8s\" (UID: \"cf863c82-61de-4465-acee-65c52424e261\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.899909 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.905867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:30 crc kubenswrapper[4780]: E0219 08:23:30.906404 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.406377638 +0000 UTC m=+154.150035077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.906436 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q9p69"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.909849 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6jldl"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.920353 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.922682 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p664d"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.925818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ntw\" (UniqueName: \"kubernetes.io/projected/d62c186f-40e9-48a9-9af3-2d44d3aa867f-kube-api-access-k9ntw\") pod \"console-operator-58897d9998-j6q46\" (UID: \"d62c186f-40e9-48a9-9af3-2d44d3aa867f\") " pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.927837 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.932698 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" event={"ID":"922ad30f-fa09-4003-8bcb-7389df141727","Type":"ContainerStarted","Data":"9586c55c8972d9102dc3dbf1dd8057cf35803588d328ebdfa84b640f263919a2"} Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.932755 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" event={"ID":"922ad30f-fa09-4003-8bcb-7389df141727","Type":"ContainerStarted","Data":"b8473a85c11aca03cfc101be8d57f3bf9dd6a509c9d3e4c3da0f384f4a9a50ea"} Feb 19 08:23:30 crc kubenswrapper[4780]: W0219 08:23:30.941357 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66917461_2afb_4a36_83fe_4ff8a0be77f8.slice/crio-f21246dff2bc491e8802638462191f465e73ab19e9156a75758dabea1b6a39e9 WatchSource:0}: Error finding container f21246dff2bc491e8802638462191f465e73ab19e9156a75758dabea1b6a39e9: Status 404 returned error can't find the container with id f21246dff2bc491e8802638462191f465e73ab19e9156a75758dabea1b6a39e9 Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.945101 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.950525 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" event={"ID":"7a37b870-8cc6-418e-ae1b-c8b84a9ca356","Type":"ContainerStarted","Data":"b15dbc52d55e9c35fa9e6a4199d58709b44314725f0c2f4a8da927ca774d2868"} Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.951239 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf79t\" (UniqueName: \"kubernetes.io/projected/17d08c15-3d45-49f2-85e2-d21567e5e5c3-kube-api-access-wf79t\") pod \"ingress-canary-pxg7d\" (UID: \"17d08c15-3d45-49f2-85e2-d21567e5e5c3\") " pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.953796 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ltjh9"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.954971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" event={"ID":"8ce275f1-b63d-4597-8680-e96315dded0c","Type":"ContainerStarted","Data":"519972d9e3fa54b475739f4783efd2399bee735929f99fd4559f568cdff8e0d2"} Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.957468 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mzcjh"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.963742 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7lzh\" (UniqueName: \"kubernetes.io/projected/ec31ad06-7520-433e-b137-b8e4a3fcd686-kube-api-access-m7lzh\") pod \"machine-config-controller-84d6567774-rhlxz\" (UID: \"ec31ad06-7520-433e-b137-b8e4a3fcd686\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:30 crc 
kubenswrapper[4780]: I0219 08:23:30.976629 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" event={"ID":"ad7d9950-0edb-4999-86d0-269be581a7f7","Type":"ContainerStarted","Data":"2cbfb4eed3bfebd65a23761c8d5d795cf03730ed4fc7e5f1b59aae06512bff4c"} Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.976705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" event={"ID":"ad7d9950-0edb-4999-86d0-269be581a7f7","Type":"ContainerStarted","Data":"9ad1c39f70857a40b65343220b98e94d7769d6f22b5ae9fba2b0eeb2bfcd872d"} Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.976724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" event={"ID":"ad7d9950-0edb-4999-86d0-269be581a7f7","Type":"ContainerStarted","Data":"522e4b920167c7075721472373602e00f1f16346d0065cae809eb582de3fcb0f"} Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.977763 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pxg7d" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.978556 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp"] Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.986538 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjp6\" (UniqueName: \"kubernetes.io/projected/c3e7e898-b119-4a31-8073-99ab35588548-kube-api-access-fvjp6\") pod \"machine-config-server-8h7ch\" (UID: \"c3e7e898-b119-4a31-8073-99ab35588548\") " pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:30 crc kubenswrapper[4780]: I0219 08:23:30.988666 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.001620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cprlw\" (UniqueName: \"kubernetes.io/projected/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-kube-api-access-cprlw\") pod \"collect-profiles-29524815-wr5xg\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.001949 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.007262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.009199 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.509185655 +0000 UTC m=+154.252843104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.025933 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bw88\" (UniqueName: \"kubernetes.io/projected/b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae-kube-api-access-9bw88\") pod \"service-ca-9c57cc56f-jsm4b\" (UID: \"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae\") " pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.037604 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swvlx\" (UniqueName: \"kubernetes.io/projected/0a9ee3ae-5c55-4682-af7d-4ff31566df16-kube-api-access-swvlx\") pod \"multus-admission-controller-857f4d67dd-tfpvq\" (UID: \"0a9ee3ae-5c55-4682-af7d-4ff31566df16\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.052930 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.054656 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.056734 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" event={"ID":"efdec686-8e3a-4566-b46b-a2d6f4c48648","Type":"ContainerStarted","Data":"4dfa07c97388ec669c21779cc020ffb51b8e1c962d9080adef66647592dc8c83"} Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.056775 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" event={"ID":"efdec686-8e3a-4566-b46b-a2d6f4c48648","Type":"ContainerStarted","Data":"af0c0c83a30faa6b1d5b58a83abcab7e36b0e691f833d5fcb2f513bbf04ed7c1"} Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.064500 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.068325 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" event={"ID":"a04d00b6-08f3-4210-9251-83466c020e6c","Type":"ContainerStarted","Data":"e4bbd4600ee58797817bed5c213eb9ee0a9a885bfe331b6707a58095727e18b2"} Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.068397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" event={"ID":"a04d00b6-08f3-4210-9251-83466c020e6c","Type":"ContainerStarted","Data":"5943fa44df63f70cd2d0084f21cff5211390ff269d14e98f1322b09f51240948"} Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.068884 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.068917 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfm8\" (UniqueName: \"kubernetes.io/projected/2199c38e-d39e-4bee-8ea9-4ab5672a0e36-kube-api-access-cmfm8\") pod \"machine-config-operator-74547568cd-wjp8x\" (UID: \"2199c38e-d39e-4bee-8ea9-4ab5672a0e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:31 crc kubenswrapper[4780]: W0219 08:23:31.070873 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff6bb32_c2a1_4e4a_bfa0_f4193a90832f.slice/crio-ec993194ff827a5da9d31ea2e603906c938d46c3b4a05e24bf4ac00887fb9329 WatchSource:0}: Error finding container ec993194ff827a5da9d31ea2e603906c938d46c3b4a05e24bf4ac00887fb9329: Status 404 returned error can't find the container with id ec993194ff827a5da9d31ea2e603906c938d46c3b4a05e24bf4ac00887fb9329 Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.085225 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.089669 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvqf4\" (UniqueName: \"kubernetes.io/projected/7a379ce4-1016-48f8-beb7-c20b3c014839-kube-api-access-dvqf4\") pod \"service-ca-operator-777779d784-4cwqg\" (UID: \"7a379ce4-1016-48f8-beb7-c20b3c014839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.101467 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" Feb 19 08:23:31 crc kubenswrapper[4780]: W0219 08:23:31.110465 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28c08ad8_6d6d_4072_bf01_6aecd11b8bb9.slice/crio-fc013167f1612e51e7d1488a03a77972fda62f6f01b43c3deb4e923f8df5148b WatchSource:0}: Error finding container fc013167f1612e51e7d1488a03a77972fda62f6f01b43c3deb4e923f8df5148b: Status 404 returned error can't find the container with id fc013167f1612e51e7d1488a03a77972fda62f6f01b43c3deb4e923f8df5148b Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.111330 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.111385 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.111692 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.611664461 +0000 UTC m=+154.355321920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.111942 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.114714 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.614690945 +0000 UTC m=+154.358348584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.123056 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7t7x\" (UniqueName: \"kubernetes.io/projected/ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127-kube-api-access-p7t7x\") pod \"ingress-operator-5b745b69d9-fxk99\" (UID: \"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.134874 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.138534 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwh7h\" (UniqueName: \"kubernetes.io/projected/edc65727-b71e-48c1-bdf4-a72d483b1ca5-kube-api-access-qwh7h\") pod \"csi-hostpathplugin-4frqf\" (UID: \"edc65727-b71e-48c1-bdf4-a72d483b1ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.146103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbs9s\" (UniqueName: \"kubernetes.io/projected/c59f0b48-4a1e-4e03-a641-3abbd1642bbf-kube-api-access-mbs9s\") pod \"olm-operator-6b444d44fb-hz5bw\" (UID: \"c59f0b48-4a1e-4e03-a641-3abbd1642bbf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.151657 4780 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.166873 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.169914 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln4h5\" (UniqueName: \"kubernetes.io/projected/6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7-kube-api-access-ln4h5\") pod \"dns-default-8hmxg\" (UID: \"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7\") " pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.178483 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.179535 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfqp2\" (UniqueName: \"kubernetes.io/projected/f841ab7c-b591-480d-8c4a-70003c08e679-kube-api-access-wfqp2\") pod \"marketplace-operator-79b997595-vwdw8\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.185928 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.191517 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.192030 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.212092 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.212638 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmnh\" (UniqueName: \"kubernetes.io/projected/f0778220-8d41-435d-9685-9394fd991915-kube-api-access-8tmnh\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfp9l\" (UID: \"f0778220-8d41-435d-9685-9394fd991915\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.213507 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.213588 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.7135655 +0000 UTC m=+154.457222949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.217290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.218181 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.718156312 +0000 UTC m=+154.461813771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.218595 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8h7ch" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.250000 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7fd5fa-9b4d-46ad-a556-982e1af7f848-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2b26k\" (UID: \"9a7fd5fa-9b4d-46ad-a556-982e1af7f848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.279500 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.319624 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.320426 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.820401411 +0000 UTC m=+154.564058860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.330688 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.353415 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.391993 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.417812 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.421445 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.421863 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:31.921844115 +0000 UTC m=+154.665501564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.424613 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.430720 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pxg7d"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.444787 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.497189 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.524862 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.525060 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.025031653 +0000 UTC m=+154.768689102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.525557 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.525993 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.025978203 +0000 UTC m=+154.769635642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.618827 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.629002 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.629461 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.129426289 +0000 UTC m=+154.873083738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.661782 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9s5t" podStartSLOduration=126.661748968 podStartE2EDuration="2m6.661748968s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:31.656191346 +0000 UTC m=+154.399848795" watchObservedRunningTime="2026-02-19 08:23:31.661748968 +0000 UTC m=+154.405406417" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.679892 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tfpvq"] Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.731010 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.731489 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:32.231473722 +0000 UTC m=+154.975131171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.738054 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56"] Feb 19 08:23:31 crc kubenswrapper[4780]: W0219 08:23:31.750920 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda01be762_6c87_4e78_b2e9_0f2bb29af8cb.slice/crio-924e91140e7cb1f8167317edad18c3b5782960f1023ade19d4f0607ef6270a82 WatchSource:0}: Error finding container 924e91140e7cb1f8167317edad18c3b5782960f1023ade19d4f0607ef6270a82: Status 404 returned error can't find the container with id 924e91140e7cb1f8167317edad18c3b5782960f1023ade19d4f0607ef6270a82 Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.827807 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" podStartSLOduration=127.827768857 podStartE2EDuration="2m7.827768857s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:31.782760987 +0000 UTC m=+154.526418436" watchObservedRunningTime="2026-02-19 08:23:31.827768857 +0000 UTC m=+154.571426316" Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.834478 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.834689 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.334634149 +0000 UTC m=+155.078291598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.837425 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.839827 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.339806219 +0000 UTC m=+155.083463678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.939758 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.940530 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.4404969 +0000 UTC m=+155.184154359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:31 crc kubenswrapper[4780]: I0219 08:23:31.940731 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:31 crc kubenswrapper[4780]: E0219 08:23:31.941300 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.441281915 +0000 UTC m=+155.184939364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.042018 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.042899 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.542871373 +0000 UTC m=+155.286528822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.093613 4780 csr.go:261] certificate signing request csr-whq4f is approved, waiting to be issued Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.112607 4780 csr.go:257] certificate signing request csr-whq4f is issued Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.125522 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mzcjh" event={"ID":"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9","Type":"ContainerStarted","Data":"fc013167f1612e51e7d1488a03a77972fda62f6f01b43c3deb4e923f8df5148b"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.130278 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" event={"ID":"515500df-562c-4659-a3ae-12efcd533619","Type":"ContainerStarted","Data":"ab9e4f54c74b3c865ed275e55018aa461acf6a396d124bb488c3d008ece3cc4e"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.134329 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" event={"ID":"dfdb9fee-9a89-4659-b7d7-6ce459654d9f","Type":"ContainerStarted","Data":"3a297335eadfd008ce5a3bea4159a5c0a83691ebd4001f36e5659ac0987a60f6"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.134384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" 
event={"ID":"dfdb9fee-9a89-4659-b7d7-6ce459654d9f","Type":"ContainerStarted","Data":"afc524b33537444fa537948829486b9ab4b78dac739d89fd2ae10ca93c093ea7"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.138041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pxg7d" event={"ID":"17d08c15-3d45-49f2-85e2-d21567e5e5c3","Type":"ContainerStarted","Data":"e2f7b6781c33d428b01b3cf7fed441aea912f3ffd60b4dd07a736e5a11e92c2b"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.143994 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.144552 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.644537325 +0000 UTC m=+155.388194774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.149414 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" event={"ID":"798e5340-3b97-433e-a55c-88ef57c7b761","Type":"ContainerStarted","Data":"e8c48c1b57bcda6ed91b9058d9a839d8c6ddf95d8ecab593d4578283c7d9e5af"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.149476 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" event={"ID":"798e5340-3b97-433e-a55c-88ef57c7b761","Type":"ContainerStarted","Data":"3e3931b93499c2afdf20135d43cff8a17bc944e38d650cff439a9f71f26c69a5"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.157903 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" event={"ID":"6d24231e-ad6b-496d-b3ff-da7dd94d12fc","Type":"ContainerStarted","Data":"4487366c1b2e4abf8c7da37366967491939c247d7b5f4500c28a8d8d2b25bc8a"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.164457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" event={"ID":"7a37b870-8cc6-418e-ae1b-c8b84a9ca356","Type":"ContainerStarted","Data":"689294e2ddf3706e1c38ce5c3d15cf03183ea967530d011fc73aff24737364bd"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.164500 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" event={"ID":"7a37b870-8cc6-418e-ae1b-c8b84a9ca356","Type":"ContainerStarted","Data":"b1b874ae28cb847dff79245cc8afbb216ce1e0379084158a18b63b13ecc39075"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.180971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" event={"ID":"d487a323-7532-4d54-9a3d-0cab7876247f","Type":"ContainerStarted","Data":"acfbb2be7c50b79ab6d76a5e5c9c2f9a457733707f94a1c1c9f0644cd735ed51"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.192078 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" event={"ID":"8ce275f1-b63d-4597-8680-e96315dded0c","Type":"ContainerStarted","Data":"c061adfac06f7da6479b469c106a642756825b4504c7f52afd6b085ddeec5bfc"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.192148 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.193751 4780 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kswgq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.193820 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.198853 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" event={"ID":"0a9ee3ae-5c55-4682-af7d-4ff31566df16","Type":"ContainerStarted","Data":"3c07fd1ffd15a1d987b8a359bc4c5940dd231a26a83093d675b1b83ba4a82689"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.204261 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p664d" event={"ID":"8431537e-f827-40a6-8be5-836d4b203c22","Type":"ContainerStarted","Data":"edcf531f01468857076345622d287ef1226c3a87ab293c44c8756139544e3324"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.204323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p664d" event={"ID":"8431537e-f827-40a6-8be5-836d4b203c22","Type":"ContainerStarted","Data":"d38054468950bd2c7c06ba593742aba8860d5b24b2c611dd35fdf86baedd1a58"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.205320 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-p664d" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.207101 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-p664d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.207338 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p664d" podUID="8431537e-f827-40a6-8be5-836d4b203c22" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.208615 4780 generic.go:334] "Generic (PLEG): container finished" podID="efdec686-8e3a-4566-b46b-a2d6f4c48648" containerID="4dfa07c97388ec669c21779cc020ffb51b8e1c962d9080adef66647592dc8c83" 
exitCode=0 Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.208701 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" event={"ID":"efdec686-8e3a-4566-b46b-a2d6f4c48648","Type":"ContainerDied","Data":"4dfa07c97388ec669c21779cc020ffb51b8e1c962d9080adef66647592dc8c83"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.209766 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" event={"ID":"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f","Type":"ContainerStarted","Data":"ec993194ff827a5da9d31ea2e603906c938d46c3b4a05e24bf4ac00887fb9329"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.211619 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" event={"ID":"75601b03-beff-4f9d-b191-56519a5c73a4","Type":"ContainerStarted","Data":"da1b35c1645bab0e031110f9d85ff191969ebc27686dffb8744c89c0c8e3db83"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.213286 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" event={"ID":"a01be762-6c87-4e78-b2e9-0f2bb29af8cb","Type":"ContainerStarted","Data":"924e91140e7cb1f8167317edad18c3b5782960f1023ade19d4f0607ef6270a82"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.215442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" event={"ID":"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7","Type":"ContainerStarted","Data":"0cfd3f4268682277c6bbd8d3f932c380d38aee20accabd4b4799d78c37bb9894"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.217925 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" 
event={"ID":"42b42bbb-ed14-474c-b81d-9fa63e652886","Type":"ContainerStarted","Data":"6f34e526f2b727938b11ce134bfabd35db79a514e347ef0d5c1fa5d666b1cac2"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.224204 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s45d4" event={"ID":"a04d00b6-08f3-4210-9251-83466c020e6c","Type":"ContainerStarted","Data":"7dc2a8e3a3ead8d98fbfbbca06e6c570549f56d559e81c264974117843fc48f4"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.229967 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-g6dcx" event={"ID":"b57f85fe-b6a8-4e85-902e-bc8227fac331","Type":"ContainerStarted","Data":"991cc460e56174a4d39821eba4f7e8acd88930ba7df9cd726ebe89ecac8500e5"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.233489 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" event={"ID":"08448379-9a6b-4daa-8f71-c4087e4c4553","Type":"ContainerStarted","Data":"77542430afbfed7528bb7188dfa659e1a1a33829ed8f96c351d61b12ac2a280c"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.252938 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.254800 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.754772951 +0000 UTC m=+155.498430460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.260051 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.263829 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" event={"ID":"66917461-2afb-4a36-83fe-4ff8a0be77f8","Type":"ContainerStarted","Data":"423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.263872 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" event={"ID":"66917461-2afb-4a36-83fe-4ff8a0be77f8","Type":"ContainerStarted","Data":"f21246dff2bc491e8802638462191f465e73ab19e9156a75758dabea1b6a39e9"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.266235 4780 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q9p69 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.266269 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" podUID="66917461-2afb-4a36-83fe-4ff8a0be77f8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial 
tcp 10.217.0.24:6443: connect: connection refused" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.279186 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" event={"ID":"059b8dda-8090-4773-b541-544c7ae97dc7","Type":"ContainerStarted","Data":"df091d3e03d7847dcb43e7f142aec727be2263956efa21fdf5b11935b5a319dd"} Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.345002 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xt4l6" podStartSLOduration=128.344978688 podStartE2EDuration="2m8.344978688s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:32.340658794 +0000 UTC m=+155.084316243" watchObservedRunningTime="2026-02-19 08:23:32.344978688 +0000 UTC m=+155.088636137" Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.365988 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.368318 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.868296548 +0000 UTC m=+155.611953997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.398098 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.405247 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xrrt4"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.430698 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8hmxg"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.467641 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.467912 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.967884545 +0000 UTC m=+155.711542004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.468088 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.469290 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:32.969270818 +0000 UTC m=+155.712928267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: W0219 08:23:32.524157 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a379ce4_1016_48f8_beb7_c20b3c014839.slice/crio-b3355b6a1eb541768ef89a69770e6a5a8bf601617ef708bd3801cfb9f8f5185d WatchSource:0}: Error finding container b3355b6a1eb541768ef89a69770e6a5a8bf601617ef708bd3801cfb9f8f5185d: Status 404 returned error can't find the container with id b3355b6a1eb541768ef89a69770e6a5a8bf601617ef708bd3801cfb9f8f5185d Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.569936 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.570350 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.070332901 +0000 UTC m=+155.813990350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.690359 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.691140 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.191111513 +0000 UTC m=+155.934768962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.702984 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6q46"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.731437 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.795395 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.795857 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.295841748 +0000 UTC m=+156.039499197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.899060 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:32 crc kubenswrapper[4780]: E0219 08:23:32.899825 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.399812811 +0000 UTC m=+156.143470250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.905604 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4frqf"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.942300 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.955144 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x"] Feb 19 08:23:32 crc kubenswrapper[4780]: W0219 08:23:32.966765 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62c186f_40e9_48a9_9af3_2d44d3aa867f.slice/crio-7c89e9acc264fbe9c22ba69b5b037c14212c9d871a01d69bfa132405ee751ad8 WatchSource:0}: Error finding container 7c89e9acc264fbe9c22ba69b5b037c14212c9d871a01d69bfa132405ee751ad8: Status 404 returned error can't find the container with id 7c89e9acc264fbe9c22ba69b5b037c14212c9d871a01d69bfa132405ee751ad8 Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.970100 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99"] Feb 19 08:23:32 crc kubenswrapper[4780]: I0219 08:23:32.987727 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg"] Feb 19 08:23:33 crc kubenswrapper[4780]: W0219 
08:23:33.000493 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec31ad06_7520_433e_b137_b8e4a3fcd686.slice/crio-2d1678b992bd91556cd81b10105968e41b10649527eb10bb0c4d2d37577004ef WatchSource:0}: Error finding container 2d1678b992bd91556cd81b10105968e41b10649527eb10bb0c4d2d37577004ef: Status 404 returned error can't find the container with id 2d1678b992bd91556cd81b10105968e41b10649527eb10bb0c4d2d37577004ef Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.019173 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.020148 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.520114318 +0000 UTC m=+156.263771767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.020502 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jsm4b"] Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.024235 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kq8m2" podStartSLOduration=128.024220195 podStartE2EDuration="2m8.024220195s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.022630366 +0000 UTC m=+155.766287815" watchObservedRunningTime="2026-02-19 08:23:33.024220195 +0000 UTC m=+155.767877644" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.089268 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-p664d" podStartSLOduration=128.089243474 podStartE2EDuration="2m8.089243474s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.082253338 +0000 UTC m=+155.825910797" watchObservedRunningTime="2026-02-19 08:23:33.089243474 +0000 UTC m=+155.832900923" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.100661 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k"] Feb 19 08:23:33 crc kubenswrapper[4780]: W0219 08:23:33.092674 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2199c38e_d39e_4bee_8ea9_4ab5672a0e36.slice/crio-984f68061ff6311bcb2d15b197f03658794dcf01317c7ce04bd04369b3c0debc WatchSource:0}: Error finding container 984f68061ff6311bcb2d15b197f03658794dcf01317c7ce04bd04369b3c0debc: Status 404 returned error can't find the container with id 984f68061ff6311bcb2d15b197f03658794dcf01317c7ce04bd04369b3c0debc Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.114242 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podStartSLOduration=128.114211275 podStartE2EDuration="2m8.114211275s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.110590533 +0000 UTC m=+155.854247982" watchObservedRunningTime="2026-02-19 08:23:33.114211275 +0000 UTC m=+155.857868724" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.115306 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 08:18:32 +0000 UTC, rotation deadline is 2026-12-22 01:10:10.668699366 +0000 UTC Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.115360 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7336h46m37.553341155s for next certificate rotation Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.124184 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.124576 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.624563545 +0000 UTC m=+156.368220984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.128653 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwdw8"] Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.132846 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l"] Feb 19 08:23:33 crc kubenswrapper[4780]: W0219 08:23:33.157462 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7fd5fa_9b4d_46ad_a556_982e1af7f848.slice/crio-7973ea7c9f8350cc7e681b6a75c83f64e909f83fa63d90319328f914d121170a WatchSource:0}: Error finding container 7973ea7c9f8350cc7e681b6a75c83f64e909f83fa63d90319328f914d121170a: Status 404 returned error can't find the container with id 7973ea7c9f8350cc7e681b6a75c83f64e909f83fa63d90319328f914d121170a Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.168909 4780 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r"] Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.177651 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" podStartSLOduration=129.177566303 podStartE2EDuration="2m9.177566303s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.176876562 +0000 UTC m=+155.920534001" watchObservedRunningTime="2026-02-19 08:23:33.177566303 +0000 UTC m=+155.921223752" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.214851 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw"] Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.222007 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-972bd" podStartSLOduration=128.221737218 podStartE2EDuration="2m8.221737218s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.214592767 +0000 UTC m=+155.958250216" watchObservedRunningTime="2026-02-19 08:23:33.221737218 +0000 UTC m=+155.965394667" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.224738 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.224857 4780 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.724837593 +0000 UTC m=+156.468495042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.225619 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.225979 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.725959488 +0000 UTC m=+156.469616927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: W0219 08:23:33.230824 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf841ab7c_b591_480d_8c4a_70003c08e679.slice/crio-b8a0def4d9f2f147e0cbf871582df300d2785329b92e8d487c56e0a27470a072 WatchSource:0}: Error finding container b8a0def4d9f2f147e0cbf871582df300d2785329b92e8d487c56e0a27470a072: Status 404 returned error can't find the container with id b8a0def4d9f2f147e0cbf871582df300d2785329b92e8d487c56e0a27470a072 Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.293905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" event={"ID":"059b8dda-8090-4773-b541-544c7ae97dc7","Type":"ContainerStarted","Data":"03dbc41d529316943266aa05376441bb4499cddda7dde9025386371a91a9c9d2"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.299450 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" event={"ID":"8433aeaf-86c7-4f3a-b2c2-3e402450ee89","Type":"ContainerStarted","Data":"a33720ff5cdad951466e511202580712fab9b652ad40aba36ed386c52e2919cb"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.303086 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" 
event={"ID":"5ff6bb32-c2a1-4e4a-bfa0-f4193a90832f","Type":"ContainerStarted","Data":"cdf05aa53d331179efa15346d8408f199e43b7dbff1bc4af24d9ba79058fafc8"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.304283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" event={"ID":"33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1","Type":"ContainerStarted","Data":"66fd14ac870bae5a6da4399b59e80394dd6d7cf1073b6fffa9c0e9b990a230e0"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.312914 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" event={"ID":"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c","Type":"ContainerStarted","Data":"6758e3f508ce643f62f515e0803c9accae71f7b2b7945aa9d51c44f843534662"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.329138 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7fp6" podStartSLOduration=129.329099565 podStartE2EDuration="2m9.329099565s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.25807395 +0000 UTC m=+156.001731399" watchObservedRunningTime="2026-02-19 08:23:33.329099565 +0000 UTC m=+156.072757004" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.329541 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-n7vfp" podStartSLOduration=128.329536278 podStartE2EDuration="2m8.329536278s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.328507047 +0000 UTC m=+156.072164496" watchObservedRunningTime="2026-02-19 
08:23:33.329536278 +0000 UTC m=+156.073193727" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.333489 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.334914 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.834885164 +0000 UTC m=+156.578542613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.336878 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" event={"ID":"d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7","Type":"ContainerStarted","Data":"59b578919bdcb375fcef7b5537617f173c02a189da1efc8c00498269e7acc095"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.366689 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" event={"ID":"f841ab7c-b591-480d-8c4a-70003c08e679","Type":"ContainerStarted","Data":"b8a0def4d9f2f147e0cbf871582df300d2785329b92e8d487c56e0a27470a072"} Feb 19 08:23:33 crc 
kubenswrapper[4780]: I0219 08:23:33.377674 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" event={"ID":"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae","Type":"ContainerStarted","Data":"1165cfd74b0289f49929edeb0a56b8e476d947503328f649381781080bece504"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.389180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" event={"ID":"f0778220-8d41-435d-9685-9394fd991915","Type":"ContainerStarted","Data":"21c6982481b7a954a452b8a531b449cd1e37cabc93e77764a1121ab2ef05af1e"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.394619 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" event={"ID":"ec31ad06-7520-433e-b137-b8e4a3fcd686","Type":"ContainerStarted","Data":"2d1678b992bd91556cd81b10105968e41b10649527eb10bb0c4d2d37577004ef"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.413886 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" event={"ID":"6d24231e-ad6b-496d-b3ff-da7dd94d12fc","Type":"ContainerStarted","Data":"e9904e9a3a211925fcaec7f31875c1c65ca8b46cc9b45b1a42c4e305c39ce3c0"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.415101 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.417195 4780 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bqpdd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 08:23:33 
crc kubenswrapper[4780]: I0219 08:23:33.417253 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.419552 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" event={"ID":"42b42bbb-ed14-474c-b81d-9fa63e652886","Type":"ContainerStarted","Data":"eb5f0e3692afd26454c41cbd4e80c9f582febec9f06441bd149a3f21d4c301ce"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.420532 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.421617 4780 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-sqsb4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.421682 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" podUID="42b42bbb-ed14-474c-b81d-9fa63e652886" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.426502 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" 
event={"ID":"7a379ce4-1016-48f8-beb7-c20b3c014839","Type":"ContainerStarted","Data":"b3355b6a1eb541768ef89a69770e6a5a8bf601617ef708bd3801cfb9f8f5185d"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.431029 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" event={"ID":"cf863c82-61de-4465-acee-65c52424e261","Type":"ContainerStarted","Data":"1c91527b655877321daa030614b4cca61605a47e02a96f38de6445d643cf8ea1"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.432965 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" podStartSLOduration=128.432953484 podStartE2EDuration="2m8.432953484s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.431582141 +0000 UTC m=+156.175239590" watchObservedRunningTime="2026-02-19 08:23:33.432953484 +0000 UTC m=+156.176610933" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.433093 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjf56" podStartSLOduration=128.433086948 podStartE2EDuration="2m8.433086948s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.357944956 +0000 UTC m=+156.101602405" watchObservedRunningTime="2026-02-19 08:23:33.433086948 +0000 UTC m=+156.176744407" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.435711 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.436296 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:33.936266866 +0000 UTC m=+156.679924315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.448376 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pxg7d" event={"ID":"17d08c15-3d45-49f2-85e2-d21567e5e5c3","Type":"ContainerStarted","Data":"1f5e8070b1252212779a82f4234dd80f2b438d87902878808d6e75269dce54e2"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.457478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8hmxg" event={"ID":"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7","Type":"ContainerStarted","Data":"0144f249577b1802d4acc07c979eeff18a4579b1af49826efaaf2e2336401049"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.460551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" 
event={"ID":"75601b03-beff-4f9d-b191-56519a5c73a4","Type":"ContainerStarted","Data":"ff9d4800f3b81137aea3c1f7559725688a2d09f07fa2b4f5075afda89ddd7cbe"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.461561 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" podStartSLOduration=128.461534347 podStartE2EDuration="2m8.461534347s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.458265406 +0000 UTC m=+156.201922855" watchObservedRunningTime="2026-02-19 08:23:33.461534347 +0000 UTC m=+156.205191796" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.471093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6q46" event={"ID":"d62c186f-40e9-48a9-9af3-2d44d3aa867f","Type":"ContainerStarted","Data":"7c89e9acc264fbe9c22ba69b5b037c14212c9d871a01d69bfa132405ee751ad8"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.474941 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" event={"ID":"515500df-562c-4659-a3ae-12efcd533619","Type":"ContainerStarted","Data":"6ce1efd78908d18987381926ce8064e0c4fe3c4148adccb538505ef89500117c"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.476919 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" event={"ID":"9a7fd5fa-9b4d-46ad-a556-982e1af7f848","Type":"ContainerStarted","Data":"7973ea7c9f8350cc7e681b6a75c83f64e909f83fa63d90319328f914d121170a"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.482182 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8h7ch" 
event={"ID":"c3e7e898-b119-4a31-8073-99ab35588548","Type":"ContainerStarted","Data":"4eab0436db3e3e059519f6e539d6fcf6e45dbeda839feeceb23ce83a0b34b62a"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.482246 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8h7ch" event={"ID":"c3e7e898-b119-4a31-8073-99ab35588548","Type":"ContainerStarted","Data":"3bdfdf43c34e7fd2a40908b5a87a39970dc98fc67b220665e575983145112e88"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.488076 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" event={"ID":"edc65727-b71e-48c1-bdf4-a72d483b1ca5","Type":"ContainerStarted","Data":"3758e126248d05c284f5c6c3053328dfafb90273891692f515ae22571a236ccb"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.503802 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-g6dcx" event={"ID":"b57f85fe-b6a8-4e85-902e-bc8227fac331","Type":"ContainerStarted","Data":"99e10ed3ec02a9df16006c9573360defffc63a9a28535e37359ec7d1d45f5ff3"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.506882 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" event={"ID":"08448379-9a6b-4daa-8f71-c4087e4c4553","Type":"ContainerStarted","Data":"e12ad4d222ddf8afb76bc5bd1b237963aafe12b2a74c9a63fceffeb1274eeab3"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.507328 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.512656 4780 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5t7th container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: 
connect: connection refused" start-of-body= Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.512712 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" podUID="08448379-9a6b-4daa-8f71-c4087e4c4553" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.513289 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pxg7d" podStartSLOduration=5.513258655 podStartE2EDuration="5.513258655s" podCreationTimestamp="2026-02-19 08:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.484728683 +0000 UTC m=+156.228386132" watchObservedRunningTime="2026-02-19 08:23:33.513258655 +0000 UTC m=+156.256916114" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.514741 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ltjh9" podStartSLOduration=128.51473131 podStartE2EDuration="2m8.51473131s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.511216252 +0000 UTC m=+156.254873711" watchObservedRunningTime="2026-02-19 08:23:33.51473131 +0000 UTC m=+156.258388759" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.532270 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mzcjh" event={"ID":"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9","Type":"ContainerStarted","Data":"e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.549229 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.552248 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8h7ch" podStartSLOduration=5.552211449 podStartE2EDuration="5.552211449s" podCreationTimestamp="2026-02-19 08:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.546705228 +0000 UTC m=+156.290362677" watchObservedRunningTime="2026-02-19 08:23:33.552211449 +0000 UTC m=+156.295868908" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.549992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" event={"ID":"efdec686-8e3a-4566-b46b-a2d6f4c48648","Type":"ContainerStarted","Data":"c091c1817696b3e6d3f7b4323907d07d456568ee62c761aaed2d0be06b97204b"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.555297 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.554359 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.053451247 +0000 UTC m=+156.797108696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.587155 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" event={"ID":"2199c38e-d39e-4bee-8ea9-4ab5672a0e36","Type":"ContainerStarted","Data":"984f68061ff6311bcb2d15b197f03658794dcf01317c7ce04bd04369b3c0debc"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.589940 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fdh22" podStartSLOduration=128.589922784 podStartE2EDuration="2m8.589922784s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.5830038 +0000 UTC m=+156.326661239" watchObservedRunningTime="2026-02-19 08:23:33.589922784 +0000 UTC m=+156.333580233" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.595508 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" event={"ID":"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127","Type":"ContainerStarted","Data":"c3e35d7d5b82ee1ac9c8df5df714b71e0fb722ab9ea2977cdc60ef833401364a"} Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.595965 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-p664d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.596020 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p664d" podUID="8431537e-f827-40a6-8be5-836d4b203c22" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.597968 4780 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kswgq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.598037 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.621722 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-g6dcx" podStartSLOduration=128.621699726 podStartE2EDuration="2m8.621699726s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.619222029 +0000 UTC m=+156.362879478" watchObservedRunningTime="2026-02-19 08:23:33.621699726 +0000 UTC m=+156.365357175" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.655482 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.658446 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.15843013 +0000 UTC m=+156.902087769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.672516 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mzcjh" podStartSLOduration=128.672488585 podStartE2EDuration="2m8.672488585s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.662832516 +0000 UTC m=+156.406489975" watchObservedRunningTime="2026-02-19 08:23:33.672488585 +0000 UTC m=+156.416146034" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.706882 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" podStartSLOduration=128.706827906 podStartE2EDuration="2m8.706827906s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.698892571 +0000 UTC m=+156.442550020" watchObservedRunningTime="2026-02-19 08:23:33.706827906 +0000 UTC m=+156.450485355" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.756742 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.757117 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.257084539 +0000 UTC m=+157.000741988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.757717 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.761804 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.261772073 +0000 UTC m=+157.005429522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.859489 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.860817 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.360770311 +0000 UTC m=+157.104427760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.962527 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:33 crc kubenswrapper[4780]: E0219 08:23:33.963031 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.46301321 +0000 UTC m=+157.206670659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.991950 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.997437 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 19 08:23:33 crc kubenswrapper[4780]: I0219 08:23:33.997517 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.064961 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.065307 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.5652845 +0000 UTC m=+157.308941949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.166847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.167674 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.667660443 +0000 UTC m=+157.411317892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.197992 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.248258 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" podStartSLOduration=129.245361204 podStartE2EDuration="2m9.245361204s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:33.744448388 +0000 UTC m=+156.488105837" watchObservedRunningTime="2026-02-19 08:23:34.245361204 +0000 UTC m=+156.989018653" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.273850 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.274733 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:34.774714701 +0000 UTC m=+157.518372150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.375577 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.377378 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.877364773 +0000 UTC m=+157.621022222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.481371 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.482474 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:34.98244747 +0000 UTC m=+157.726104919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.585613 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.586058 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.08603983 +0000 UTC m=+157.829697279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.604754 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" event={"ID":"cf863c82-61de-4465-acee-65c52424e261","Type":"ContainerStarted","Data":"ab2e295f0f9e6ca6ca3410d3a3d79808389ac53cc6fbe59db2f6d1c44f61d601"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.609264 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" event={"ID":"2199c38e-d39e-4bee-8ea9-4ab5672a0e36","Type":"ContainerStarted","Data":"76a833497b7a7228baa74411877baaca810764e4abf70ab5f9aecd68245e7351"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.609306 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" event={"ID":"2199c38e-d39e-4bee-8ea9-4ab5672a0e36","Type":"ContainerStarted","Data":"eddb33070b011102b3eb260af99f0c4dc19a3495be8f902cb5a6528faa3e72b4"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.614454 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" event={"ID":"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127","Type":"ContainerStarted","Data":"606cb9a73b7fa59fbf3b04270a1d2d3acf000e434c0f58a515732da76cdbb80d"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.614513 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" event={"ID":"ccfb8d3f-a510-4f8a-a81b-02d3d0dd5127","Type":"ContainerStarted","Data":"f3370ec87ed361a063932f1df6b07faefc5f0dae65c7e0903a731ba6472033be"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.618583 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" event={"ID":"7a379ce4-1016-48f8-beb7-c20b3c014839","Type":"ContainerStarted","Data":"a50e0b30a6d53b92a83cd6eda7fe9672cb913d916ab86beea99901a44be8cbaf"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.627354 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" event={"ID":"f841ab7c-b591-480d-8c4a-70003c08e679","Type":"ContainerStarted","Data":"2d4a12b15001128752f6ecdaf4ade6b48319237518cb4ed1e2ff07d817fcec2d"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.627570 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.630011 4780 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vwdw8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.630077 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.632253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" event={"ID":"d487a323-7532-4d54-9a3d-0cab7876247f","Type":"ContainerStarted","Data":"930c9c4897b93ef16733d6aeafcb523ea6d59cc9049c2885fcd6422254a4f34c"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.648999 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" event={"ID":"c59f0b48-4a1e-4e03-a641-3abbd1642bbf","Type":"ContainerStarted","Data":"a37a1a46266ef11a75db0817854302fa11814c85bc1e3c4b4e2461d3d6c25abf"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.649056 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" event={"ID":"c59f0b48-4a1e-4e03-a641-3abbd1642bbf","Type":"ContainerStarted","Data":"0b95121b41be70ff017084b1c9b29a7322e5d69b3345b8671c5c87d09ece3304"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.650185 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.654552 4780 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hz5bw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.654613 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" podUID="c59f0b48-4a1e-4e03-a641-3abbd1642bbf" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.666497 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-j6q46" event={"ID":"d62c186f-40e9-48a9-9af3-2d44d3aa867f","Type":"ContainerStarted","Data":"9423383cee0b4b0d84e5eae1173a7aba23dcc35837000f2ba1c58259be973e82"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.667514 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.669791 4780 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6q46 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.670208 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6q46" podUID="d62c186f-40e9-48a9-9af3-2d44d3aa867f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.687942 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.688371 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.188356982 +0000 UTC m=+157.932014431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.724749 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" event={"ID":"33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1","Type":"ContainerStarted","Data":"b7d5d61c03ee750c26b70e02d00c4dc7f458f8f36ee0c50cacfb84d12feaac6d"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.724801 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" event={"ID":"33bc9c54-7f7d-4c00-b0db-4a4ee27cb5c1","Type":"ContainerStarted","Data":"e271efc743e82d5bbd726dfff92726d58d7b735dac82d41a7d1995c2db31eb1a"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.744430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" event={"ID":"ec31ad06-7520-433e-b137-b8e4a3fcd686","Type":"ContainerStarted","Data":"2d96cc66e8aa2524e393f3466dbd17e73fe7e4e63278bdc7f9f638d5cf581b5c"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.757763 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" podStartSLOduration=129.757731415 podStartE2EDuration="2m9.757731415s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:34.725020835 +0000 UTC m=+157.468678284" 
watchObservedRunningTime="2026-02-19 08:23:34.757731415 +0000 UTC m=+157.501388864" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.759157 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wjp8x" podStartSLOduration=129.759149219 podStartE2EDuration="2m9.759149219s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:34.653641509 +0000 UTC m=+157.397298958" watchObservedRunningTime="2026-02-19 08:23:34.759149219 +0000 UTC m=+157.502806688" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.795338 4780 generic.go:334] "Generic (PLEG): container finished" podID="79d7a2e4-9a84-4f25-aa19-a07b107cfd4c" containerID="8426323f94283ae3d17fd8636042bb94b828ce436120349933eff1cdbf703e45" exitCode=0 Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.796037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" event={"ID":"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c","Type":"ContainerDied","Data":"8426323f94283ae3d17fd8636042bb94b828ce436120349933eff1cdbf703e45"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.796057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.799075 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:35.299061202 +0000 UTC m=+158.042718651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.808958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8hmxg" event={"ID":"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7","Type":"ContainerStarted","Data":"d7abe8ae389bfc06b48ca721cf89532df51cf84650815a0792b4f84d3c4d95c4"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.809031 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8hmxg" event={"ID":"6abbcb7a-fa5e-4e9a-bab8-d58fda9b3db7","Type":"ContainerStarted","Data":"92d3d4c6cd48fc5efcd97b13e06c6beca14d7553d616064b92acc7ad58804ccd"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.809931 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.827095 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" event={"ID":"b1af9d3f-6867-48bd-8a4c-892d5ba5a0ae","Type":"ContainerStarted","Data":"6fed992d74a3875f1605766768b5fadc6348e9921042c133fc158ce01ed8b76f"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.832240 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4cwqg" podStartSLOduration=129.832215197 podStartE2EDuration="2m9.832215197s" podCreationTimestamp="2026-02-19 
08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:34.780399306 +0000 UTC m=+157.524056755" watchObservedRunningTime="2026-02-19 08:23:34.832215197 +0000 UTC m=+157.575872646" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.834173 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" podStartSLOduration=129.834164487 podStartE2EDuration="2m9.834164487s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:34.833001781 +0000 UTC m=+157.576659230" watchObservedRunningTime="2026-02-19 08:23:34.834164487 +0000 UTC m=+157.577821936" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.859464 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" event={"ID":"0a9ee3ae-5c55-4682-af7d-4ff31566df16","Type":"ContainerStarted","Data":"d0f27ea7c94e188222650466570128b57d3d6a0edb142c7ba17784e5ff5b67ae"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.859526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" event={"ID":"0a9ee3ae-5c55-4682-af7d-4ff31566df16","Type":"ContainerStarted","Data":"126b54c7359e8a4779cf836db2653ff191a4bb9982ab2228223f84015761cd40"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.869532 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxk99" podStartSLOduration=129.869513669 podStartE2EDuration="2m9.869513669s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 08:23:34.867836537 +0000 UTC m=+157.611493986" watchObservedRunningTime="2026-02-19 08:23:34.869513669 +0000 UTC m=+157.613171118" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.877419 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" event={"ID":"f0778220-8d41-435d-9685-9394fd991915","Type":"ContainerStarted","Data":"ba5c144c66f395be899f20eb889076e96fd45fb3b1356476e358f6372d37f011"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.903779 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:34 crc kubenswrapper[4780]: E0219 08:23:34.910039 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.41001659 +0000 UTC m=+158.153674039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.930445 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" event={"ID":"059b8dda-8090-4773-b541-544c7ae97dc7","Type":"ContainerStarted","Data":"d506e9f8fc56488c1ac39a251b9c3a22c7fb53df4252138e467b7c83bf9126db"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.957927 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" event={"ID":"8433aeaf-86c7-4f3a-b2c2-3e402450ee89","Type":"ContainerStarted","Data":"de3197054d9a29ecfa71e93917d676aa0c275566f0a451657a1a3a949a44dc66"} Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.972600 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljdwz" podStartSLOduration=129.972557503 podStartE2EDuration="2m9.972557503s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:34.902690914 +0000 UTC m=+157.646348383" watchObservedRunningTime="2026-02-19 08:23:34.972557503 +0000 UTC m=+157.716214952" Feb 19 08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.984465 4780 generic.go:334] "Generic (PLEG): container finished" podID="a01be762-6c87-4e78-b2e9-0f2bb29af8cb" containerID="2f7a38c62cc65c309b2abf4cddd0e818f83bf11e8c4ac56ba064d9b842357379" exitCode=0 Feb 19 
08:23:34 crc kubenswrapper[4780]: I0219 08:23:34.984601 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" event={"ID":"a01be762-6c87-4e78-b2e9-0f2bb29af8cb","Type":"ContainerDied","Data":"2f7a38c62cc65c309b2abf4cddd0e818f83bf11e8c4ac56ba064d9b842357379"} Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.001642 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:35 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:35 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:35 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.001704 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.005580 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.006302 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.506280545 +0000 UTC m=+158.249938004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.014820 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlq4r" podStartSLOduration=130.014797328 podStartE2EDuration="2m10.014797328s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:34.979098795 +0000 UTC m=+157.722756384" watchObservedRunningTime="2026-02-19 08:23:35.014797328 +0000 UTC m=+157.758454777" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.023279 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" event={"ID":"9a7fd5fa-9b4d-46ad-a556-982e1af7f848","Type":"ContainerStarted","Data":"ddd062dc51fc7e0fd6ad0de053e93c4f7358aa50b4eb1a9a2463441393cec350"} Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.065384 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfp9l" podStartSLOduration=130.06536553 podStartE2EDuration="2m10.06536553s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.064608067 +0000 UTC m=+157.808265516" 
watchObservedRunningTime="2026-02-19 08:23:35.06536553 +0000 UTC m=+157.809022979" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.067426 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-jsm4b" podStartSLOduration=130.067417924 podStartE2EDuration="2m10.067417924s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.024454326 +0000 UTC m=+157.768111785" watchObservedRunningTime="2026-02-19 08:23:35.067417924 +0000 UTC m=+157.811075373" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.107654 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.108596 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.608549775 +0000 UTC m=+158.352207374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.127289 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8hmxg" podStartSLOduration=8.127253183 podStartE2EDuration="8.127253183s" podCreationTimestamp="2026-02-19 08:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.126605033 +0000 UTC m=+157.870262482" watchObservedRunningTime="2026-02-19 08:23:35.127253183 +0000 UTC m=+157.870910632" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.190786 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sqsb4" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.214212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.216499 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:35.71648409 +0000 UTC m=+158.460141539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.235603 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.238978 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tfpvq" podStartSLOduration=130.238959114 podStartE2EDuration="2m10.238959114s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.238761828 +0000 UTC m=+157.982419277" watchObservedRunningTime="2026-02-19 08:23:35.238959114 +0000 UTC m=+157.982616573" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.302079 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2b26k" podStartSLOduration=130.302061814 podStartE2EDuration="2m10.302061814s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.299447003 +0000 UTC m=+158.043104452" watchObservedRunningTime="2026-02-19 08:23:35.302061814 +0000 UTC 
m=+158.045719263" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.305618 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-j6q46" podStartSLOduration=130.305591193 podStartE2EDuration="2m10.305591193s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.270419016 +0000 UTC m=+158.014076465" watchObservedRunningTime="2026-02-19 08:23:35.305591193 +0000 UTC m=+158.049248642" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.322741 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.323240 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.823220328 +0000 UTC m=+158.566877777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.380040 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" podStartSLOduration=131.380023193 podStartE2EDuration="2m11.380023193s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.377505825 +0000 UTC m=+158.121163274" watchObservedRunningTime="2026-02-19 08:23:35.380023193 +0000 UTC m=+158.123680642" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.424852 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.425257 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:35.92524484 +0000 UTC m=+158.668902289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.488753 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6jldl" podStartSLOduration=130.488732771 podStartE2EDuration="2m10.488732771s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:35.487524384 +0000 UTC m=+158.231181833" watchObservedRunningTime="2026-02-19 08:23:35.488732771 +0000 UTC m=+158.232390220" Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.546834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.547506 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.047486447 +0000 UTC m=+158.791143896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.653104 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.653760 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.15374776 +0000 UTC m=+158.897405209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.755185 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.755561 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.255529685 +0000 UTC m=+158.999187124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.856570 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.856977 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.356961449 +0000 UTC m=+159.100618898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.957663 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:35 crc kubenswrapper[4780]: E0219 08:23:35.958032 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.458015021 +0000 UTC m=+159.201672460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.995366 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:35 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:35 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:35 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:35 crc kubenswrapper[4780]: I0219 08:23:35.995438 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.028819 4780 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5t7th container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.028875 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" podUID="08448379-9a6b-4daa-8f71-c4087e4c4553" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.043339 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" event={"ID":"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c","Type":"ContainerStarted","Data":"527681debc7884d7e36b06e47bbeea35dcba2be4aeec48f2077fe457852c0444"} Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.043398 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" event={"ID":"79d7a2e4-9a84-4f25-aa19-a07b107cfd4c","Type":"ContainerStarted","Data":"cf9aa1bbdd851e7ded47dbd008cbe853b7659d169683fafbc971ec3d36599f71"} Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.056019 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.056492 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.057783 4780 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xrrt4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.31:8443/livez\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.057833 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" podUID="79d7a2e4-9a84-4f25-aa19-a07b107cfd4c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.31:8443/livez\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.059072 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.059479 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.559462436 +0000 UTC m=+159.303119885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.060047 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" event={"ID":"a01be762-6c87-4e78-b2e9-0f2bb29af8cb","Type":"ContainerStarted","Data":"24b1385c0eaf55a7edcb6f565370691dc3b0f85ade252d1edaf56330709d196d"} Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.078687 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" event={"ID":"cf863c82-61de-4465-acee-65c52424e261","Type":"ContainerStarted","Data":"8c5ad5c9c604150269340cd916c2b22ced8959be6cc1b7c712acbe5f9d569614"} Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.079093 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.084963 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" event={"ID":"edc65727-b71e-48c1-bdf4-a72d483b1ca5","Type":"ContainerStarted","Data":"905820d7753f8c1221fc37a5539e31aca690d534488f74d327abda635c2163d7"} Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.099514 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" event={"ID":"ec31ad06-7520-433e-b137-b8e4a3fcd686","Type":"ContainerStarted","Data":"9e01984f2ace77a29aba0a430adb4b0b3fe72eef7f2f9dc327d3935e69df6e0f"} Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.108297 4780 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vwdw8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.108375 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.140727 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" podStartSLOduration=132.140699426 podStartE2EDuration="2m12.140699426s" podCreationTimestamp="2026-02-19 08:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:36.088462882 +0000 UTC 
m=+158.832120331" watchObservedRunningTime="2026-02-19 08:23:36.140699426 +0000 UTC m=+158.884356875" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.142866 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" podStartSLOduration=131.142846382 podStartE2EDuration="2m11.142846382s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:36.135767503 +0000 UTC m=+158.879424942" watchObservedRunningTime="2026-02-19 08:23:36.142846382 +0000 UTC m=+158.886503831" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.161831 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.162967 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.662947883 +0000 UTC m=+159.406605332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.206202 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hz5bw" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.248608 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rhlxz" podStartSLOduration=131.248582699 podStartE2EDuration="2m11.248582699s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:36.247564178 +0000 UTC m=+158.991221627" watchObservedRunningTime="2026-02-19 08:23:36.248582699 +0000 UTC m=+158.992240148" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.249519 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" podStartSLOduration=131.249512828 podStartE2EDuration="2m11.249512828s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:36.197751318 +0000 UTC m=+158.941408757" watchObservedRunningTime="2026-02-19 08:23:36.249512828 +0000 UTC m=+158.993170277" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.269359 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.275449 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsvc2" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.276954 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.776937125 +0000 UTC m=+159.520594574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.336227 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.336306 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.372246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.373505 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.873488648 +0000 UTC m=+159.617146097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.473221 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.473620 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:36.973606982 +0000 UTC m=+159.717264431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.574480 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.574789 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.074765157 +0000 UTC m=+159.818422616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.630186 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qhjsh"] Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.631773 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.634270 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.669461 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhjsh"] Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.686737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-catalog-content\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.686794 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-utilities\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" 
Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.686828 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gq4w\" (UniqueName: \"kubernetes.io/projected/4da72063-3969-4d56-b11e-ab1fbbef5b3b-kube-api-access-7gq4w\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.686885 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.687345 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.187327955 +0000 UTC m=+159.930985404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.788779 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.789029 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.288989496 +0000 UTC m=+160.032646945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.789160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.789377 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-catalog-content\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.789583 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.289566474 +0000 UTC m=+160.033223923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.789924 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-catalog-content\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.790002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-utilities\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.790296 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-utilities\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.790440 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gq4w\" (UniqueName: \"kubernetes.io/projected/4da72063-3969-4d56-b11e-ab1fbbef5b3b-kube-api-access-7gq4w\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " 
pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.822043 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pknb2"] Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.823578 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.823585 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gq4w\" (UniqueName: \"kubernetes.io/projected/4da72063-3969-4d56-b11e-ab1fbbef5b3b-kube-api-access-7gq4w\") pod \"certified-operators-qhjsh\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") " pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.825994 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.836142 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pknb2"] Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.891893 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.892107 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.392065341 +0000 UTC m=+160.135722790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.892276 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-catalog-content\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.892310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.892331 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-utilities\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.892376 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5mr\" (UniqueName: 
\"kubernetes.io/projected/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-kube-api-access-dg5mr\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.892735 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.392714501 +0000 UTC m=+160.136371940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.947412 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.993790 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.993992 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:37.49396175 +0000 UTC m=+160.237619199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.994012 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:36 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:36 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:36 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.994109 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.994337 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-catalog-content\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.994396 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-utilities\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.994426 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.994506 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg5mr\" (UniqueName: \"kubernetes.io/projected/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-kube-api-access-dg5mr\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.995738 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-catalog-content\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: I0219 08:23:36.995945 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-utilities\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:36 crc kubenswrapper[4780]: E0219 08:23:36.996050 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.496034024 +0000 UTC m=+160.239691473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.036536 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rw27"] Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.037557 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.096843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.097416 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrpk\" (UniqueName: \"kubernetes.io/projected/ff2c7521-96f7-4727-b14b-537d7b9ead0d-kube-api-access-thrpk\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.097482 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-utilities\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.097508 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-catalog-content\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.097666 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.597647743 +0000 UTC m=+160.341305192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.099495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg5mr\" (UniqueName: \"kubernetes.io/projected/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-kube-api-access-dg5mr\") pod \"community-operators-pknb2\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") " pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.107417 4780 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6q46 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.107491 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6q46" podUID="d62c186f-40e9-48a9-9af3-2d44d3aa867f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.159196 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.189742 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" event={"ID":"edc65727-b71e-48c1-bdf4-a72d483b1ca5","Type":"ContainerStarted","Data":"950a4663fef2a022587d001abaee7d0d1334fe1ea7d6537bc409df21f963254d"} Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.200064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.200285 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrpk\" (UniqueName: \"kubernetes.io/projected/ff2c7521-96f7-4727-b14b-537d7b9ead0d-kube-api-access-thrpk\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.200417 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-utilities\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.200469 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-catalog-content\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " 
pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.204910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-catalog-content\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.205092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-utilities\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.205826 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.705807685 +0000 UTC m=+160.449465314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.229839 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xfb9z"] Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.242002 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.264745 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfb9z"] Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.273070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrpk\" (UniqueName: \"kubernetes.io/projected/ff2c7521-96f7-4727-b14b-537d7b9ead0d-kube-api-access-thrpk\") pod \"certified-operators-6rw27\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") " pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.302729 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.303045 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxvj\" (UniqueName: \"kubernetes.io/projected/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-kube-api-access-dsxvj\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.303075 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-utilities\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.303118 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-catalog-content\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.303262 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.803246486 +0000 UTC m=+160.546903935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.361580 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.371557 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j6q46" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.407265 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.407378 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxvj\" (UniqueName: \"kubernetes.io/projected/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-kube-api-access-dsxvj\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.407411 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-utilities\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.407460 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-catalog-content\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.407918 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-catalog-content\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.408074 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:37.908059304 +0000 UTC m=+160.651716753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.408175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-utilities\") pod \"community-operators-xfb9z\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.425231 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rw27"] Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.469087 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxvj\" (UniqueName: \"kubernetes.io/projected/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-kube-api-access-dsxvj\") pod \"community-operators-xfb9z\" (UID: 
\"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.515151 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.515467 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.015450221 +0000 UTC m=+160.759107670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.616428 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.616801 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.116787372 +0000 UTC m=+160.860444821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.624157 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.720862 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.721288 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.22125418 +0000 UTC m=+160.964911629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.769013 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhjsh"] Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.824330 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.824727 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.324714697 +0000 UTC m=+161.068372146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:37 crc kubenswrapper[4780]: I0219 08:23:37.933935 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:37 crc kubenswrapper[4780]: E0219 08:23:37.934328 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.434307823 +0000 UTC m=+161.177965272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.035665 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.036097 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.536084948 +0000 UTC m=+161.279742387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.040605 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:38 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:38 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:38 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.040697 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.137810 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.138564 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:38.638548734 +0000 UTC m=+161.382206183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.171258 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rw27"] Feb 19 08:23:38 crc kubenswrapper[4780]: W0219 08:23:38.204367 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff2c7521_96f7_4727_b14b_537d7b9ead0d.slice/crio-2c186351b70c31ab23c389c109a9b0e8c66b73635182e05035919d3f2fdf424f WatchSource:0}: Error finding container 2c186351b70c31ab23c389c109a9b0e8c66b73635182e05035919d3f2fdf424f: Status 404 returned error can't find the container with id 2c186351b70c31ab23c389c109a9b0e8c66b73635182e05035919d3f2fdf424f Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.240540 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.240949 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 08:23:38.740935717 +0000 UTC m=+161.484593166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.282643 4780 generic.go:334] "Generic (PLEG): container finished" podID="8433aeaf-86c7-4f3a-b2c2-3e402450ee89" containerID="de3197054d9a29ecfa71e93917d676aa0c275566f0a451657a1a3a949a44dc66" exitCode=0 Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.282777 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" event={"ID":"8433aeaf-86c7-4f3a-b2c2-3e402450ee89","Type":"ContainerDied","Data":"de3197054d9a29ecfa71e93917d676aa0c275566f0a451657a1a3a949a44dc66"} Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.342560 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.342854 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.842805885 +0000 UTC m=+161.586463334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.342967 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.343378 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.843361622 +0000 UTC m=+161.587019071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.363666 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" event={"ID":"edc65727-b71e-48c1-bdf4-a72d483b1ca5","Type":"ContainerStarted","Data":"1a43bbd50e15a3de22aa4f580bf7e1f3c4b593aaecc48499041533a9e920c0b8"} Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.389556 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerStarted","Data":"c6536cbdddfeb3b684afe186c5c36146db1f993993a4cfeddf9bb68c3b23b48b"} Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.389589 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerStarted","Data":"eb4b69e77d43f22092bdbc8a08288df110d0a0fc1fb3ef23eaeaefd557b0c4c3"} Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.398824 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.444399 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 
08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.445689 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:38.945643652 +0000 UTC m=+161.689301101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.446213 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pknb2"] Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.546107 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.549847 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.049820731 +0000 UTC m=+161.793478190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.647051 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.647494 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.147475478 +0000 UTC m=+161.891132927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.668715 4780 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.749608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.749926 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.249912523 +0000 UTC m=+161.993569972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.800172 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x5jxl"] Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.801649 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.803773 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.819617 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5jxl"] Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.828138 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfb9z"] Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.854689 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.855195 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.355174326 +0000 UTC m=+162.098831775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.855691 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.856104 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.356094044 +0000 UTC m=+162.099751493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.957757 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.957956 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.45792724 +0000 UTC m=+162.201584689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.958114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-utilities\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.958158 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-catalog-content\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.958188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn46g\" (UniqueName: \"kubernetes.io/projected/2e76db53-981b-4921-bfc0-8bb607700a4c-kube-api-access-rn46g\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.958389 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:38 crc kubenswrapper[4780]: E0219 08:23:38.958741 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.458725315 +0000 UTC m=+162.202382764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.995273 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:38 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:38 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:38 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:38 crc kubenswrapper[4780]: I0219 08:23:38.995341 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.059043 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.059193 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.559176269 +0000 UTC m=+162.302833718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.059313 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.059362 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-utilities\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.059379 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-catalog-content\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.059408 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn46g\" (UniqueName: \"kubernetes.io/projected/2e76db53-981b-4921-bfc0-8bb607700a4c-kube-api-access-rn46g\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.059684 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.559674864 +0000 UTC m=+162.303332313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.059934 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-utilities\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.060153 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-catalog-content\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.084592 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn46g\" (UniqueName: \"kubernetes.io/projected/2e76db53-981b-4921-bfc0-8bb607700a4c-kube-api-access-rn46g\") pod \"redhat-marketplace-x5jxl\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") " pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.126536 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.160263 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.160510 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.660473509 +0000 UTC m=+162.404130958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.160626 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.161302 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.661288424 +0000 UTC m=+162.404945873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.203937 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dr9mt"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.206805 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.219555 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr9mt"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.262035 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.262790 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.762768559 +0000 UTC m=+162.506426008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.363914 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-utilities\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.364169 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.364254 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkcqz\" (UniqueName: \"kubernetes.io/projected/4c1b6e16-c5ef-4858-af98-cf370809d4c8-kube-api-access-fkcqz\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.364287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-catalog-content\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.364311 4780 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T08:23:38.668743935Z","Handler":null,"Name":""} Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.364534 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:23:39.864519693 +0000 UTC m=+162.608177142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cv2g8" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.367670 4780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.367721 4780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.400787 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" 
event={"ID":"edc65727-b71e-48c1-bdf4-a72d483b1ca5","Type":"ContainerStarted","Data":"77bf7e0a72faed07599e74e38525e1e85f331ed47228b7de66a9b55838807d2b"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.404032 4780 generic.go:334] "Generic (PLEG): container finished" podID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerID="c6536cbdddfeb3b684afe186c5c36146db1f993993a4cfeddf9bb68c3b23b48b" exitCode=0 Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.404161 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerDied","Data":"c6536cbdddfeb3b684afe186c5c36146db1f993993a4cfeddf9bb68c3b23b48b"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.407704 4780 generic.go:334] "Generic (PLEG): container finished" podID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerID="f3af6dcd3d36693463e9ba769f710adfe5d9fbbe27cd0332cb1ed4393498540a" exitCode=0 Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.407772 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rw27" event={"ID":"ff2c7521-96f7-4727-b14b-537d7b9ead0d","Type":"ContainerDied","Data":"f3af6dcd3d36693463e9ba769f710adfe5d9fbbe27cd0332cb1ed4393498540a"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.407851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rw27" event={"ID":"ff2c7521-96f7-4727-b14b-537d7b9ead0d","Type":"ContainerStarted","Data":"2c186351b70c31ab23c389c109a9b0e8c66b73635182e05035919d3f2fdf424f"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.413965 4780 generic.go:334] "Generic (PLEG): container finished" podID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerID="8b1d612784f534b113a1eb94b32d5d196d0a2b661ea08a3b1b8a0998a8ab2882" exitCode=0 Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.414356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerDied","Data":"8b1d612784f534b113a1eb94b32d5d196d0a2b661ea08a3b1b8a0998a8ab2882"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.414421 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerStarted","Data":"4fbe63911374cdf370617959601722a2817828cc990287c30cd255092e728fee"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.417283 4780 generic.go:334] "Generic (PLEG): container finished" podID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerID="e199bcb73ef6d8527dabba3f3a8c48301a238469824cda46c29d749b722e42b7" exitCode=0 Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.417564 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pknb2" event={"ID":"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95","Type":"ContainerDied","Data":"e199bcb73ef6d8527dabba3f3a8c48301a238469824cda46c29d749b722e42b7"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.417652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pknb2" event={"ID":"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95","Type":"ContainerStarted","Data":"3e7413e4585cd7f87788234b3a4f9d9cb083d1be23c99f16cffc6a56bbfc3a42"} Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.442605 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5jxl"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.451849 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4frqf" podStartSLOduration=11.45181012 podStartE2EDuration="11.45181012s" podCreationTimestamp="2026-02-19 08:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:39.435639981 +0000 UTC m=+162.179297430" watchObservedRunningTime="2026-02-19 08:23:39.45181012 +0000 UTC m=+162.195467569" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.465822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.469666 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-utilities\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.469823 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkcqz\" (UniqueName: \"kubernetes.io/projected/4c1b6e16-c5ef-4858-af98-cf370809d4c8-kube-api-access-fkcqz\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.469853 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-catalog-content\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.470589 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-catalog-content\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.470808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-utilities\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.474482 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 08:23:39 crc kubenswrapper[4780]: W0219 08:23:39.475856 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e76db53_981b_4921_bfc0_8bb607700a4c.slice/crio-eb2b335c76e1ae4c58035e4a0aedc95d4a8645c61eec7db5ef46174d3cd91e13 WatchSource:0}: Error finding container eb2b335c76e1ae4c58035e4a0aedc95d4a8645c61eec7db5ef46174d3cd91e13: Status 404 returned error can't find the container with id eb2b335c76e1ae4c58035e4a0aedc95d4a8645c61eec7db5ef46174d3cd91e13 Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.499239 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkcqz\" (UniqueName: \"kubernetes.io/projected/4c1b6e16-c5ef-4858-af98-cf370809d4c8-kube-api-access-fkcqz\") pod \"redhat-marketplace-dr9mt\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.537164 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.571787 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.586414 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.586465 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.633910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cv2g8\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.722111 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.723077 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.728083 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.728558 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.746030 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.757297 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.796781 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kbfwg"] Feb 19 08:23:39 crc kubenswrapper[4780]: E0219 08:23:39.797013 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8433aeaf-86c7-4f3a-b2c2-3e402450ee89" containerName="collect-profiles" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.797026 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8433aeaf-86c7-4f3a-b2c2-3e402450ee89" containerName="collect-profiles" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.797156 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8433aeaf-86c7-4f3a-b2c2-3e402450ee89" containerName="collect-profiles" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.798240 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.803713 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.807379 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbfwg"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.832251 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.833321 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.838017 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.840579 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.841251 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.877210 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-config-volume\") pod \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.877303 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cprlw\" (UniqueName: 
\"kubernetes.io/projected/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-kube-api-access-cprlw\") pod \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.877356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-secret-volume\") pod \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\" (UID: \"8433aeaf-86c7-4f3a-b2c2-3e402450ee89\") " Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.877659 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72d5cd72-c75e-4676-999d-c76d02269554-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.877939 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72d5cd72-c75e-4676-999d-c76d02269554-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.878393 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-config-volume" (OuterVolumeSpecName: "config-volume") pod "8433aeaf-86c7-4f3a-b2c2-3e402450ee89" (UID: "8433aeaf-86c7-4f3a-b2c2-3e402450ee89"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.886133 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-kube-api-access-cprlw" (OuterVolumeSpecName: "kube-api-access-cprlw") pod "8433aeaf-86c7-4f3a-b2c2-3e402450ee89" (UID: "8433aeaf-86c7-4f3a-b2c2-3e402450ee89"). InnerVolumeSpecName "kube-api-access-cprlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.886634 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8433aeaf-86c7-4f3a-b2c2-3e402450ee89" (UID: "8433aeaf-86c7-4f3a-b2c2-3e402450ee89"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.886698 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.965238 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.979727 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72d5cd72-c75e-4676-999d-c76d02269554-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.979888 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72d5cd72-c75e-4676-999d-c76d02269554-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.979956 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/730efa7f-7941-4a25-9230-d8c5499b1588-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.979996 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-catalog-content\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980174 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwn4\" (UniqueName: \"kubernetes.io/projected/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-kube-api-access-pcwn4\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980255 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730efa7f-7941-4a25-9230-d8c5499b1588-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980280 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72d5cd72-c75e-4676-999d-c76d02269554-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980360 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-utilities\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980434 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cprlw\" (UniqueName: \"kubernetes.io/projected/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-kube-api-access-cprlw\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980471 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.980482 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8433aeaf-86c7-4f3a-b2c2-3e402450ee89-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.993829 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:39 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:39 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:39 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:39 crc kubenswrapper[4780]: I0219 08:23:39.993969 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.003901 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72d5cd72-c75e-4676-999d-c76d02269554-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.082357 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730efa7f-7941-4a25-9230-d8c5499b1588-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 
08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.082423 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-utilities\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.082466 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/730efa7f-7941-4a25-9230-d8c5499b1588-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.082491 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-catalog-content\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.082542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcwn4\" (UniqueName: \"kubernetes.io/projected/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-kube-api-access-pcwn4\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.082943 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.083356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/730efa7f-7941-4a25-9230-d8c5499b1588-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.083895 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-catalog-content\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.083954 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-utilities\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.101382 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr9mt"] Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.108156 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcwn4\" (UniqueName: \"kubernetes.io/projected/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-kube-api-access-pcwn4\") pod \"redhat-operators-kbfwg\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") " pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.115033 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/730efa7f-7941-4a25-9230-d8c5499b1588-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.116668 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:23:40 crc kubenswrapper[4780]: W0219 08:23:40.120469 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1b6e16_c5ef_4858_af98_cf370809d4c8.slice/crio-0bd76a525892fb9a739de7cf1ccf3118217c04ef6bf0f5e2bcfd4be03899c48f WatchSource:0}: Error finding container 0bd76a525892fb9a739de7cf1ccf3118217c04ef6bf0f5e2bcfd4be03899c48f: Status 404 returned error can't find the container with id 0bd76a525892fb9a739de7cf1ccf3118217c04ef6bf0f5e2bcfd4be03899c48f Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.152290 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cv2g8"] Feb 19 08:23:40 crc kubenswrapper[4780]: W0219 08:23:40.162327 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80cbb07a_c89f_46cb_b9ef_1dfdc4dc167f.slice/crio-491471857da159e8552b3b8a1a11503982d044b0467700833e95083674678c4d WatchSource:0}: Error finding container 491471857da159e8552b3b8a1a11503982d044b0467700833e95083674678c4d: Status 404 returned error can't find the container with id 491471857da159e8552b3b8a1a11503982d044b0467700833e95083674678c4d Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.185588 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.206298 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thv5l"] Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.208293 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.211384 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thv5l"] Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.285373 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-utilities\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.285778 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rzb\" (UniqueName: \"kubernetes.io/projected/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-kube-api-access-p7rzb\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.285816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-catalog-content\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.295383 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.295748 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.299615 4780 patch_prober.go:28] interesting pod/console-f9d7485db-mzcjh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.299746 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mzcjh" podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.387349 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-catalog-content\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.387430 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-utilities\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.387551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rzb\" (UniqueName: \"kubernetes.io/projected/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-kube-api-access-p7rzb\") pod \"redhat-operators-thv5l\" (UID: 
\"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.388564 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-catalog-content\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.388649 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-utilities\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.407205 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rzb\" (UniqueName: \"kubernetes.io/projected/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-kube-api-access-p7rzb\") pod \"redhat-operators-thv5l\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.686441 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.687585 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-p664d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.687617 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-p664d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.687701 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p664d" podUID="8431537e-f827-40a6-8be5-836d4b203c22" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.687726 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p664d" podUID="8431537e-f827-40a6-8be5-836d4b203c22" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.708446 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.734170 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" 
event={"ID":"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f","Type":"ContainerStarted","Data":"491471857da159e8552b3b8a1a11503982d044b0467700833e95083674678c4d"} Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.753106 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerStarted","Data":"6708b7ead2cd0fff7c562845e98bc7b54bb63839d5f1cfbcfde78b51799267df"} Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.753183 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerStarted","Data":"0bd76a525892fb9a739de7cf1ccf3118217c04ef6bf0f5e2bcfd4be03899c48f"} Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.753552 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.790106 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.790153 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg" event={"ID":"8433aeaf-86c7-4f3a-b2c2-3e402450ee89","Type":"ContainerDied","Data":"a33720ff5cdad951466e511202580712fab9b652ad40aba36ed386c52e2919cb"} Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.790652 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a33720ff5cdad951466e511202580712fab9b652ad40aba36ed386c52e2919cb" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.812926 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerID="330dfbb0264691393baca75d382991a28e298ac310ea079973f51697bcda76f5" exitCode=0 Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.814808 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5jxl" event={"ID":"2e76db53-981b-4921-bfc0-8bb607700a4c","Type":"ContainerDied","Data":"330dfbb0264691393baca75d382991a28e298ac310ea079973f51697bcda76f5"} Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.814872 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5jxl" event={"ID":"2e76db53-981b-4921-bfc0-8bb607700a4c","Type":"ContainerStarted","Data":"eb2b335c76e1ae4c58035e4a0aedc95d4a8645c61eec7db5ef46174d3cd91e13"} Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.874867 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5t7th" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.946417 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:40 crc 
kubenswrapper[4780]: I0219 08:23:40.946458 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.978362 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.990280 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.998447 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:40 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:40 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:40 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:40 crc kubenswrapper[4780]: I0219 08:23:40.998514 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.082592 4780 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xrrt4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]log ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]etcd ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 08:23:41 crc 
kubenswrapper[4780]: [+]poststarthook/max-in-flight-filter ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 08:23:41 crc kubenswrapper[4780]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/openshift.io-startinformers ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 08:23:41 crc kubenswrapper[4780]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 08:23:41 crc kubenswrapper[4780]: livez check failed Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.082675 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" podUID="79d7a2e4-9a84-4f25-aa19-a07b107cfd4c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.291896 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.443383 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.461835 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbfwg"] Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.864191 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72d5cd72-c75e-4676-999d-c76d02269554","Type":"ContainerStarted","Data":"f3a35095e2c72208cb8cc52cc56ca123d1c587932df651e476a993b858e9a2be"} Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.864586 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72d5cd72-c75e-4676-999d-c76d02269554","Type":"ContainerStarted","Data":"f198c308da00382ea745d74b88f3832677751ecc2fb0abfc82e1209f6c967cc5"} Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.865830 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thv5l"] Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.868392 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" event={"ID":"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f","Type":"ContainerStarted","Data":"6a39451dc49b8dc13e0111b61ba2bcc3004ef2bbada475ade3ed025fad125151"} Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.868475 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.895203 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerID="6708b7ead2cd0fff7c562845e98bc7b54bb63839d5f1cfbcfde78b51799267df" exitCode=0 Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.895301 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerDied","Data":"6708b7ead2cd0fff7c562845e98bc7b54bb63839d5f1cfbcfde78b51799267df"} Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.897797 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.897741993 podStartE2EDuration="2.897741993s" podCreationTimestamp="2026-02-19 08:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:41.881237923 +0000 UTC m=+164.624895382" watchObservedRunningTime="2026-02-19 08:23:41.897741993 +0000 UTC m=+164.641399442" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.906642 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" podStartSLOduration=136.906613447 podStartE2EDuration="2m16.906613447s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:41.901571231 +0000 UTC m=+164.645228680" watchObservedRunningTime="2026-02-19 08:23:41.906613447 +0000 UTC m=+164.650270896" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.983510 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"730efa7f-7941-4a25-9230-d8c5499b1588","Type":"ContainerStarted","Data":"3e8adfb40b5b977ee6aac2c4c635e668d6541a4e796bf130602a4a108bdc4081"} Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.983611 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s6sx8" Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.983624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbfwg" event={"ID":"f67dbc25-bb01-4883-b25e-c34d66a3b4fe","Type":"ContainerStarted","Data":"42d9bfbfc42b9e97673368c0c390f2fb8fb76edd64bc30fbe1cedf34a89daae8"} Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.994177 4780 patch_prober.go:28] interesting 
pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:41 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:41 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:41 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:41 crc kubenswrapper[4780]: I0219 08:23:41.994254 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.342717 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.977771 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"730efa7f-7941-4a25-9230-d8c5499b1588","Type":"ContainerStarted","Data":"f5001fd2ca9a3a120a70e799492129cab243a4db82ea6fbadf8f4aa6396a6085"} Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.980961 4780 generic.go:334] "Generic (PLEG): container finished" podID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerID="570c85979494d11e33cc2a8fb4961012f3eadef6a2559282f24fb7dc4157c0f2" exitCode=0 Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.981231 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbfwg" event={"ID":"f67dbc25-bb01-4883-b25e-c34d66a3b4fe","Type":"ContainerDied","Data":"570c85979494d11e33cc2a8fb4961012f3eadef6a2559282f24fb7dc4157c0f2"} Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.990475 4780 generic.go:334] "Generic (PLEG): container finished" podID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" 
containerID="fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e" exitCode=0 Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.990751 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerDied","Data":"fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e"} Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.990829 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerStarted","Data":"f6154e558e4845e998473759511283701cf79075e61df12032736ac06530bdbc"} Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.993085 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.993058025 podStartE2EDuration="3.993058025s" podCreationTimestamp="2026-02-19 08:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:42.99191266 +0000 UTC m=+165.735570109" watchObservedRunningTime="2026-02-19 08:23:42.993058025 +0000 UTC m=+165.736715474" Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.994616 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:42 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:42 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:42 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:42 crc kubenswrapper[4780]: I0219 08:23:42.994679 4780 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:43 crc kubenswrapper[4780]: I0219 08:23:43.194701 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8hmxg" Feb 19 08:23:43 crc kubenswrapper[4780]: I0219 08:23:43.993648 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:43 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:43 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:43 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:43 crc kubenswrapper[4780]: I0219 08:23:43.994052 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:44 crc kubenswrapper[4780]: I0219 08:23:44.032734 4780 generic.go:334] "Generic (PLEG): container finished" podID="72d5cd72-c75e-4676-999d-c76d02269554" containerID="f3a35095e2c72208cb8cc52cc56ca123d1c587932df651e476a993b858e9a2be" exitCode=0 Feb 19 08:23:44 crc kubenswrapper[4780]: I0219 08:23:44.032842 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72d5cd72-c75e-4676-999d-c76d02269554","Type":"ContainerDied","Data":"f3a35095e2c72208cb8cc52cc56ca123d1c587932df651e476a993b858e9a2be"} Feb 19 08:23:44 crc kubenswrapper[4780]: I0219 08:23:44.040546 4780 generic.go:334] "Generic (PLEG): container finished" podID="730efa7f-7941-4a25-9230-d8c5499b1588" 
containerID="f5001fd2ca9a3a120a70e799492129cab243a4db82ea6fbadf8f4aa6396a6085" exitCode=0 Feb 19 08:23:44 crc kubenswrapper[4780]: I0219 08:23:44.040612 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"730efa7f-7941-4a25-9230-d8c5499b1588","Type":"ContainerDied","Data":"f5001fd2ca9a3a120a70e799492129cab243a4db82ea6fbadf8f4aa6396a6085"} Feb 19 08:23:44 crc kubenswrapper[4780]: I0219 08:23:44.992533 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:44 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:44 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:44 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:44 crc kubenswrapper[4780]: I0219 08:23:44.992621 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:45 crc kubenswrapper[4780]: I0219 08:23:45.993960 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:45 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:45 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:45 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:45 crc kubenswrapper[4780]: I0219 08:23:45.994396 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:46 crc kubenswrapper[4780]: I0219 08:23:46.071263 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:46 crc kubenswrapper[4780]: I0219 08:23:46.080805 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xrrt4" Feb 19 08:23:47 crc kubenswrapper[4780]: I0219 08:23:47.002404 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:47 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:47 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:47 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:47 crc kubenswrapper[4780]: I0219 08:23:47.002489 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:47 crc kubenswrapper[4780]: I0219 08:23:47.969377 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: \"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:47 crc kubenswrapper[4780]: I0219 08:23:47.978038 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1002d5b-b8b1-4175-9e36-9fbea7a1c060-metrics-certs\") pod \"network-metrics-daemon-jg765\" (UID: 
\"d1002d5b-b8b1-4175-9e36-9fbea7a1c060\") " pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:47 crc kubenswrapper[4780]: I0219 08:23:47.993851 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:47 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:47 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:47 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:47 crc kubenswrapper[4780]: I0219 08:23:47.994480 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:48 crc kubenswrapper[4780]: I0219 08:23:48.260430 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jg765" Feb 19 08:23:48 crc kubenswrapper[4780]: I0219 08:23:48.994349 4780 patch_prober.go:28] interesting pod/router-default-5444994796-g6dcx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:23:48 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Feb 19 08:23:48 crc kubenswrapper[4780]: [+]process-running ok Feb 19 08:23:48 crc kubenswrapper[4780]: healthz check failed Feb 19 08:23:48 crc kubenswrapper[4780]: I0219 08:23:48.994422 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g6dcx" podUID="b57f85fe-b6a8-4e85-902e-bc8227fac331" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:23:49 crc kubenswrapper[4780]: I0219 08:23:49.993871 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:49 crc kubenswrapper[4780]: I0219 08:23:49.997411 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-g6dcx" Feb 19 08:23:50 crc kubenswrapper[4780]: I0219 08:23:50.296316 4780 patch_prober.go:28] interesting pod/console-f9d7485db-mzcjh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 19 08:23:50 crc kubenswrapper[4780]: I0219 08:23:50.296477 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mzcjh" podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 19 08:23:50 crc 
kubenswrapper[4780]: I0219 08:23:50.558155 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-p664d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 08:23:50 crc kubenswrapper[4780]: I0219 08:23:50.558219 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-p664d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 08:23:50 crc kubenswrapper[4780]: I0219 08:23:50.558233 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p664d" podUID="8431537e-f827-40a6-8be5-836d4b203c22" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 08:23:50 crc kubenswrapper[4780]: I0219 08:23:50.558286 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p664d" podUID="8431537e-f827-40a6-8be5-836d4b203c22" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.308906 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.382514 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/730efa7f-7941-4a25-9230-d8c5499b1588-kubelet-dir\") pod \"730efa7f-7941-4a25-9230-d8c5499b1588\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.382685 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730efa7f-7941-4a25-9230-d8c5499b1588-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "730efa7f-7941-4a25-9230-d8c5499b1588" (UID: "730efa7f-7941-4a25-9230-d8c5499b1588"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.382949 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730efa7f-7941-4a25-9230-d8c5499b1588-kube-api-access\") pod \"730efa7f-7941-4a25-9230-d8c5499b1588\" (UID: \"730efa7f-7941-4a25-9230-d8c5499b1588\") " Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.383337 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/730efa7f-7941-4a25-9230-d8c5499b1588-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.391463 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730efa7f-7941-4a25-9230-d8c5499b1588-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "730efa7f-7941-4a25-9230-d8c5499b1588" (UID: "730efa7f-7941-4a25-9230-d8c5499b1588"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[4780]: I0219 08:23:54.484203 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730efa7f-7941-4a25-9230-d8c5499b1588-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.188087 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"730efa7f-7941-4a25-9230-d8c5499b1588","Type":"ContainerDied","Data":"3e8adfb40b5b977ee6aac2c4c635e668d6541a4e796bf130602a4a108bdc4081"} Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.188665 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8adfb40b5b977ee6aac2c4c635e668d6541a4e796bf130602a4a108bdc4081" Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.188245 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.843621 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kswgq"] Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.844004 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" containerID="cri-o://c061adfac06f7da6479b469c106a642756825b4504c7f52afd6b085ddeec5bfc" gracePeriod=30 Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.869488 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"] Feb 19 08:23:55 crc kubenswrapper[4780]: I0219 08:23:55.869805 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerName="route-controller-manager" containerID="cri-o://e9904e9a3a211925fcaec7f31875c1c65ca8b46cc9b45b1a42c4e305c39ce3c0" gracePeriod=30 Feb 19 08:23:57 crc kubenswrapper[4780]: I0219 08:23:57.106706 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:23:57 crc kubenswrapper[4780]: I0219 08:23:57.201887 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerID="e9904e9a3a211925fcaec7f31875c1c65ca8b46cc9b45b1a42c4e305c39ce3c0" exitCode=0 Feb 19 08:23:57 crc kubenswrapper[4780]: I0219 08:23:57.201951 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" event={"ID":"6d24231e-ad6b-496d-b3ff-da7dd94d12fc","Type":"ContainerDied","Data":"e9904e9a3a211925fcaec7f31875c1c65ca8b46cc9b45b1a42c4e305c39ce3c0"} Feb 19 08:23:57 crc kubenswrapper[4780]: I0219 08:23:57.203567 4780 generic.go:334] "Generic (PLEG): container finished" podID="8ce275f1-b63d-4597-8680-e96315dded0c" containerID="c061adfac06f7da6479b469c106a642756825b4504c7f52afd6b085ddeec5bfc" exitCode=0 Feb 19 08:23:57 crc kubenswrapper[4780]: I0219 08:23:57.203595 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" event={"ID":"8ce275f1-b63d-4597-8680-e96315dded0c","Type":"ContainerDied","Data":"c061adfac06f7da6479b469c106a642756825b4504c7f52afd6b085ddeec5bfc"} Feb 19 08:23:59 crc kubenswrapper[4780]: I0219 08:23:59.894283 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:24:00 crc kubenswrapper[4780]: I0219 08:24:00.303597 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:24:00 crc kubenswrapper[4780]: I0219 08:24:00.311326 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:24:00 crc kubenswrapper[4780]: I0219 08:24:00.404904 4780 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kswgq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 19 08:24:00 crc kubenswrapper[4780]: I0219 08:24:00.405593 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 19 08:24:00 crc kubenswrapper[4780]: I0219 08:24:00.565987 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-p664d" Feb 19 08:24:01 crc kubenswrapper[4780]: I0219 08:24:01.929814 4780 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bqpdd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:24:01 crc kubenswrapper[4780]: I0219 08:24:01.929957 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.406308 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.412361 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.457835 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq"] Feb 19 08:24:02 crc kubenswrapper[4780]: E0219 08:24:02.458187 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730efa7f-7941-4a25-9230-d8c5499b1588" containerName="pruner" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458206 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="730efa7f-7941-4a25-9230-d8c5499b1588" containerName="pruner" Feb 19 08:24:02 crc kubenswrapper[4780]: E0219 08:24:02.458214 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d5cd72-c75e-4676-999d-c76d02269554" containerName="pruner" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458221 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d5cd72-c75e-4676-999d-c76d02269554" containerName="pruner" Feb 19 08:24:02 crc kubenswrapper[4780]: E0219 08:24:02.458242 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerName="route-controller-manager" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458249 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerName="route-controller-manager" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458351 4780 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" containerName="route-controller-manager" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458366 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d5cd72-c75e-4676-999d-c76d02269554" containerName="pruner" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458377 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="730efa7f-7941-4a25-9230-d8c5499b1588" containerName="pruner" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.458881 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.470330 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq"] Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.519638 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmmfr\" (UniqueName: \"kubernetes.io/projected/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-kube-api-access-jmmfr\") pod \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.519705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72d5cd72-c75e-4676-999d-c76d02269554-kubelet-dir\") pod \"72d5cd72-c75e-4676-999d-c76d02269554\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.519802 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-serving-cert\") pod \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " Feb 
19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.519832 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72d5cd72-c75e-4676-999d-c76d02269554-kube-api-access\") pod \"72d5cd72-c75e-4676-999d-c76d02269554\" (UID: \"72d5cd72-c75e-4676-999d-c76d02269554\") " Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.519956 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-client-ca\") pod \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.519979 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-config\") pod \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\" (UID: \"6d24231e-ad6b-496d-b3ff-da7dd94d12fc\") " Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.521212 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d5cd72-c75e-4676-999d-c76d02269554-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72d5cd72-c75e-4676-999d-c76d02269554" (UID: "72d5cd72-c75e-4676-999d-c76d02269554"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.521717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-config" (OuterVolumeSpecName: "config") pod "6d24231e-ad6b-496d-b3ff-da7dd94d12fc" (UID: "6d24231e-ad6b-496d-b3ff-da7dd94d12fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.522186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "6d24231e-ad6b-496d-b3ff-da7dd94d12fc" (UID: "6d24231e-ad6b-496d-b3ff-da7dd94d12fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.530485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d5cd72-c75e-4676-999d-c76d02269554-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72d5cd72-c75e-4676-999d-c76d02269554" (UID: "72d5cd72-c75e-4676-999d-c76d02269554"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.531350 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-kube-api-access-jmmfr" (OuterVolumeSpecName: "kube-api-access-jmmfr") pod "6d24231e-ad6b-496d-b3ff-da7dd94d12fc" (UID: "6d24231e-ad6b-496d-b3ff-da7dd94d12fc"). InnerVolumeSpecName "kube-api-access-jmmfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.532659 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6d24231e-ad6b-496d-b3ff-da7dd94d12fc" (UID: "6d24231e-ad6b-496d-b3ff-da7dd94d12fc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.621820 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-client-ca\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.621907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-config\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.621936 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47dbp\" (UniqueName: \"kubernetes.io/projected/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-kube-api-access-47dbp\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.621981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-serving-cert\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.622481 4780 reconciler_common.go:293] "Volume detached for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72d5cd72-c75e-4676-999d-c76d02269554-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.622550 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.622570 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72d5cd72-c75e-4676-999d-c76d02269554-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.622596 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.622617 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.622641 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmmfr\" (UniqueName: \"kubernetes.io/projected/6d24231e-ad6b-496d-b3ff-da7dd94d12fc-kube-api-access-jmmfr\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.724740 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-serving-cert\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.724893 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-client-ca\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.725012 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-config\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.725061 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47dbp\" (UniqueName: \"kubernetes.io/projected/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-kube-api-access-47dbp\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.728156 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-config\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.731832 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-serving-cert\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " 
pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.756567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47dbp\" (UniqueName: \"kubernetes.io/projected/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-kube-api-access-47dbp\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:02 crc kubenswrapper[4780]: I0219 08:24:02.817062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-client-ca\") pod \"route-controller-manager-67b7ddd96d-xrxkq\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.077767 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.258351 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.258699 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd" event={"ID":"6d24231e-ad6b-496d-b3ff-da7dd94d12fc","Type":"ContainerDied","Data":"4487366c1b2e4abf8c7da37366967491939c247d7b5f4500c28a8d8d2b25bc8a"} Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.259033 4780 scope.go:117] "RemoveContainer" containerID="e9904e9a3a211925fcaec7f31875c1c65ca8b46cc9b45b1a42c4e305c39ce3c0" Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.264940 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72d5cd72-c75e-4676-999d-c76d02269554","Type":"ContainerDied","Data":"f198c308da00382ea745d74b88f3832677751ecc2fb0abfc82e1209f6c967cc5"} Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.265014 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.265048 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f198c308da00382ea745d74b88f3832677751ecc2fb0abfc82e1209f6c967cc5" Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.332926 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"] Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.339442 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bqpdd"] Feb 19 08:24:03 crc kubenswrapper[4780]: I0219 08:24:03.951087 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d24231e-ad6b-496d-b3ff-da7dd94d12fc" path="/var/lib/kubelet/pods/6d24231e-ad6b-496d-b3ff-da7dd94d12fc/volumes" Feb 19 08:24:06 crc kubenswrapper[4780]: I0219 08:24:06.336488 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:24:06 crc kubenswrapper[4780]: I0219 08:24:06.337062 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:24:10 crc kubenswrapper[4780]: E0219 08:24:10.703297 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 
08:24:10 crc kubenswrapper[4780]: E0219 08:24:10.703957 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dg5mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pknb2_openshift-marketplace(4cf0cb94-4d97-4c5c-bfbc-272acbda2b95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:10 crc kubenswrapper[4780]: E0219 08:24:10.706378 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pknb2" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" Feb 19 08:24:11 crc kubenswrapper[4780]: I0219 08:24:11.183587 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hbk8s" Feb 19 08:24:11 crc kubenswrapper[4780]: I0219 08:24:11.405434 4780 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kswgq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:24:11 crc kubenswrapper[4780]: I0219 08:24:11.405522 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:24:15 crc kubenswrapper[4780]: E0219 08:24:15.168024 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pknb2" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" Feb 19 08:24:15 crc kubenswrapper[4780]: I0219 08:24:15.907467 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq"] Feb 19 08:24:17 crc kubenswrapper[4780]: E0219 08:24:17.090955 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 08:24:17 crc kubenswrapper[4780]: E0219 08:24:17.091292 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7rzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]
ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-thv5l_openshift-marketplace(da9e2878-8ae0-4de1-89a9-f0c55ff84d18): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:17 crc kubenswrapper[4780]: E0219 08:24:17.092465 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-thv5l" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" Feb 19 08:24:17 crc kubenswrapper[4780]: I0219 08:24:17.928322 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 08:24:17 crc kubenswrapper[4780]: I0219 08:24:17.930056 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:17 crc kubenswrapper[4780]: I0219 08:24:17.936959 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 08:24:17 crc kubenswrapper[4780]: I0219 08:24:17.936965 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 08:24:17 crc kubenswrapper[4780]: I0219 08:24:17.947541 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.000032 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/693579ef-7bf3-47d2-85f3-493aea67e524-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.000256 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/693579ef-7bf3-47d2-85f3-493aea67e524-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.101176 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/693579ef-7bf3-47d2-85f3-493aea67e524-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.101269 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/693579ef-7bf3-47d2-85f3-493aea67e524-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.101322 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/693579ef-7bf3-47d2-85f3-493aea67e524-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.123569 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/693579ef-7bf3-47d2-85f3-493aea67e524-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 
08:24:18 crc kubenswrapper[4780]: I0219 08:24:18.259449 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:19 crc kubenswrapper[4780]: E0219 08:24:19.509160 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-thv5l" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" Feb 19 08:24:21 crc kubenswrapper[4780]: I0219 08:24:21.404857 4780 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kswgq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:24:21 crc kubenswrapper[4780]: I0219 08:24:21.404955 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.332460 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.333813 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.337248 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.387261 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-var-lock\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.387375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.387450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9247aadc-86ba-41f6-a36e-d0243cd52728-kube-api-access\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.488707 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-var-lock\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.488791 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.488822 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9247aadc-86ba-41f6-a36e-d0243cd52728-kube-api-access\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.488970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-var-lock\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.489024 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.521248 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9247aadc-86ba-41f6-a36e-d0243cd52728-kube-api-access\") pod \"installer-9-crc\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:23 crc kubenswrapper[4780]: I0219 08:24:23.676244 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:24:24 crc kubenswrapper[4780]: E0219 08:24:24.298305 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 08:24:24 crc kubenswrapper[4780]: E0219 08:24:24.298533 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rn46g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x5jxl_openshift-marketplace(2e76db53-981b-4921-bfc0-8bb607700a4c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:24 crc kubenswrapper[4780]: E0219 08:24:24.299921 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-x5jxl" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" Feb 19 08:24:25 crc kubenswrapper[4780]: E0219 08:24:25.275198 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x5jxl" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.362346 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.461725 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f9d76974-82gcd"] Feb 19 08:24:25 crc kubenswrapper[4780]: E0219 08:24:25.462067 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.462086 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.464171 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" containerName="controller-manager" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.464767 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.472298 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d76974-82gcd"] Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.516241 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-client-ca\") pod \"8ce275f1-b63d-4597-8680-e96315dded0c\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.516309 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cgwp\" (UniqueName: \"kubernetes.io/projected/8ce275f1-b63d-4597-8680-e96315dded0c-kube-api-access-4cgwp\") pod \"8ce275f1-b63d-4597-8680-e96315dded0c\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.516339 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-config\") pod \"8ce275f1-b63d-4597-8680-e96315dded0c\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.516403 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-proxy-ca-bundles\") pod \"8ce275f1-b63d-4597-8680-e96315dded0c\" (UID: \"8ce275f1-b63d-4597-8680-e96315dded0c\") " Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.516434 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce275f1-b63d-4597-8680-e96315dded0c-serving-cert\") pod \"8ce275f1-b63d-4597-8680-e96315dded0c\" (UID: 
\"8ce275f1-b63d-4597-8680-e96315dded0c\") " Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.516997 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-proxy-ca-bundles\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.517045 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968a1eff-6d04-44fa-95e0-3d6f038e7750-serving-cert\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.517073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-client-ca\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.517182 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-config\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.517215 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95nf\" (UniqueName: 
\"kubernetes.io/projected/968a1eff-6d04-44fa-95e0-3d6f038e7750-kube-api-access-b95nf\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.518372 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8ce275f1-b63d-4597-8680-e96315dded0c" (UID: "8ce275f1-b63d-4597-8680-e96315dded0c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.518430 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-config" (OuterVolumeSpecName: "config") pod "8ce275f1-b63d-4597-8680-e96315dded0c" (UID: "8ce275f1-b63d-4597-8680-e96315dded0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.518735 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ce275f1-b63d-4597-8680-e96315dded0c" (UID: "8ce275f1-b63d-4597-8680-e96315dded0c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.523605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce275f1-b63d-4597-8680-e96315dded0c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ce275f1-b63d-4597-8680-e96315dded0c" (UID: "8ce275f1-b63d-4597-8680-e96315dded0c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.540157 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce275f1-b63d-4597-8680-e96315dded0c-kube-api-access-4cgwp" (OuterVolumeSpecName: "kube-api-access-4cgwp") pod "8ce275f1-b63d-4597-8680-e96315dded0c" (UID: "8ce275f1-b63d-4597-8680-e96315dded0c"). InnerVolumeSpecName "kube-api-access-4cgwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.550737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" event={"ID":"8ce275f1-b63d-4597-8680-e96315dded0c","Type":"ContainerDied","Data":"519972d9e3fa54b475739f4783efd2399bee735929f99fd4559f568cdff8e0d2"} Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.550820 4780 scope.go:117] "RemoveContainer" containerID="c061adfac06f7da6479b469c106a642756825b4504c7f52afd6b085ddeec5bfc" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.550951 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kswgq" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.617092 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kswgq"] Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.618641 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-config\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.618700 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95nf\" (UniqueName: \"kubernetes.io/projected/968a1eff-6d04-44fa-95e0-3d6f038e7750-kube-api-access-b95nf\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.618735 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-proxy-ca-bundles\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.618799 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968a1eff-6d04-44fa-95e0-3d6f038e7750-serving-cert\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: 
I0219 08:24:25.618816 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-client-ca\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.619274 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kswgq"] Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.619377 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.620728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-client-ca\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.621174 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cgwp\" (UniqueName: \"kubernetes.io/projected/8ce275f1-b63d-4597-8680-e96315dded0c-kube-api-access-4cgwp\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.621203 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.621215 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ce275f1-b63d-4597-8680-e96315dded0c-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.621248 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce275f1-b63d-4597-8680-e96315dded0c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.621628 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-proxy-ca-bundles\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.629034 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-config\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.629821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968a1eff-6d04-44fa-95e0-3d6f038e7750-serving-cert\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.635337 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95nf\" (UniqueName: \"kubernetes.io/projected/968a1eff-6d04-44fa-95e0-3d6f038e7750-kube-api-access-b95nf\") pod \"controller-manager-7f9d76974-82gcd\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: E0219 08:24:25.765607 4780 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 08:24:25 crc kubenswrapper[4780]: E0219 08:24:25.765845 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gq4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-qhjsh_openshift-marketplace(4da72063-3969-4d56-b11e-ab1fbbef5b3b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:25 crc kubenswrapper[4780]: E0219 08:24:25.767049 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qhjsh" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.813675 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.932757 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jg765"] Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.947209 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce275f1-b63d-4597-8680-e96315dded0c" path="/var/lib/kubelet/pods/8ce275f1-b63d-4597-8680-e96315dded0c/volumes" Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.977367 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.989217 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq"] Feb 19 08:24:25 crc kubenswrapper[4780]: I0219 08:24:25.992003 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 08:24:26 crc kubenswrapper[4780]: I0219 08:24:26.067029 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7f9d76974-82gcd"] Feb 19 08:24:26 crc kubenswrapper[4780]: W0219 08:24:26.127881 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod693579ef_7bf3_47d2_85f3_493aea67e524.slice/crio-d9e36bc5c23dff20479ec6f6b5629780694083d94d83876fd27c252a1ca492ea WatchSource:0}: Error finding container d9e36bc5c23dff20479ec6f6b5629780694083d94d83876fd27c252a1ca492ea: Status 404 returned error can't find the container with id d9e36bc5c23dff20479ec6f6b5629780694083d94d83876fd27c252a1ca492ea Feb 19 08:24:26 crc kubenswrapper[4780]: W0219 08:24:26.129899 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602c50a6_ac7d_4df2_b4cb_78cf32e894e8.slice/crio-ca4a266d758a80a7c05a17e4fce3789d4fadc597e5d4134d0cf468b8fbd17130 WatchSource:0}: Error finding container ca4a266d758a80a7c05a17e4fce3789d4fadc597e5d4134d0cf468b8fbd17130: Status 404 returned error can't find the container with id ca4a266d758a80a7c05a17e4fce3789d4fadc597e5d4134d0cf468b8fbd17130 Feb 19 08:24:26 crc kubenswrapper[4780]: W0219 08:24:26.132014 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9247aadc_86ba_41f6_a36e_d0243cd52728.slice/crio-519b4f5d37604309ff353e42d78c33c240bc32a08cc652fd2d3087ccd036d332 WatchSource:0}: Error finding container 519b4f5d37604309ff353e42d78c33c240bc32a08cc652fd2d3087ccd036d332: Status 404 returned error can't find the container with id 519b4f5d37604309ff353e42d78c33c240bc32a08cc652fd2d3087ccd036d332 Feb 19 08:24:26 crc kubenswrapper[4780]: W0219 08:24:26.135297 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod968a1eff_6d04_44fa_95e0_3d6f038e7750.slice/crio-de82e09e70f7250aa10e25a8e81b0b3e02cc53712f34652494a8b7f8844ba3f6 WatchSource:0}: Error finding container 
de82e09e70f7250aa10e25a8e81b0b3e02cc53712f34652494a8b7f8844ba3f6: Status 404 returned error can't find the container with id de82e09e70f7250aa10e25a8e81b0b3e02cc53712f34652494a8b7f8844ba3f6 Feb 19 08:24:26 crc kubenswrapper[4780]: E0219 08:24:26.358993 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 08:24:26 crc kubenswrapper[4780]: E0219 08:24:26.359572 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xfb9z_openshift-marketplace(2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:26 crc kubenswrapper[4780]: E0219 08:24:26.360744 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xfb9z" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" Feb 19 08:24:26 crc kubenswrapper[4780]: I0219 08:24:26.566629 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" event={"ID":"602c50a6-ac7d-4df2-b4cb-78cf32e894e8","Type":"ContainerStarted","Data":"ca4a266d758a80a7c05a17e4fce3789d4fadc597e5d4134d0cf468b8fbd17130"} Feb 19 08:24:26 crc kubenswrapper[4780]: I0219 08:24:26.571716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"693579ef-7bf3-47d2-85f3-493aea67e524","Type":"ContainerStarted","Data":"d9e36bc5c23dff20479ec6f6b5629780694083d94d83876fd27c252a1ca492ea"} Feb 19 08:24:26 crc kubenswrapper[4780]: I0219 08:24:26.573779 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9247aadc-86ba-41f6-a36e-d0243cd52728","Type":"ContainerStarted","Data":"519b4f5d37604309ff353e42d78c33c240bc32a08cc652fd2d3087ccd036d332"} Feb 19 08:24:26 crc kubenswrapper[4780]: I0219 08:24:26.576262 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jg765" 
event={"ID":"d1002d5b-b8b1-4175-9e36-9fbea7a1c060","Type":"ContainerStarted","Data":"251c185d75adbaa3014737e0c6f4c2d8ba5e7ad2751b5dc89c5eb579bf33a094"} Feb 19 08:24:26 crc kubenswrapper[4780]: I0219 08:24:26.578555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" event={"ID":"968a1eff-6d04-44fa-95e0-3d6f038e7750","Type":"ContainerStarted","Data":"de82e09e70f7250aa10e25a8e81b0b3e02cc53712f34652494a8b7f8844ba3f6"} Feb 19 08:24:26 crc kubenswrapper[4780]: E0219 08:24:26.582329 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xfb9z" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" Feb 19 08:24:26 crc kubenswrapper[4780]: E0219 08:24:26.582331 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qhjsh" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.368074 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.368813 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkcqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dr9mt_openshift-marketplace(4c1b6e16-c5ef-4858-af98-cf370809d4c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.370115 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dr9mt" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" Feb 19 08:24:27 crc 
kubenswrapper[4780]: I0219 08:24:27.590248 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" event={"ID":"602c50a6-ac7d-4df2-b4cb-78cf32e894e8","Type":"ContainerStarted","Data":"e1eb1151263f824bcb7c8cc9f58afffe6bb1116f1fc640330e485c9e2784d806"} Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.590430 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" podUID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" containerName="route-controller-manager" containerID="cri-o://e1eb1151263f824bcb7c8cc9f58afffe6bb1116f1fc640330e485c9e2784d806" gracePeriod=30 Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.591798 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.595674 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"693579ef-7bf3-47d2-85f3-493aea67e524","Type":"ContainerStarted","Data":"3175efa05312537a65679520b463365fdf6d2004fab12fd4e108fe722f5a4b64"} Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.598491 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9247aadc-86ba-41f6-a36e-d0243cd52728","Type":"ContainerStarted","Data":"88c2d5012b1e751f8775708f3c12dc6745d42c979e82da09b42ae27843bf12ad"} Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.600815 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jg765" event={"ID":"d1002d5b-b8b1-4175-9e36-9fbea7a1c060","Type":"ContainerStarted","Data":"5c302b9fc381d809909489490aef1b53b58fd6ff36d9904722b6220803a04006"} Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.606148 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" event={"ID":"968a1eff-6d04-44fa-95e0-3d6f038e7750","Type":"ContainerStarted","Data":"b0f470cc86ddc6d33ab043c14bf86a1d269d5277041cd18c56c6a47bc10d08e2"} Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.606488 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.608880 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dr9mt" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.611395 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" podStartSLOduration=32.611376729 podStartE2EDuration="32.611376729s" podCreationTimestamp="2026-02-19 08:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:27.607396557 +0000 UTC m=+210.351054006" watchObservedRunningTime="2026-02-19 08:24:27.611376729 +0000 UTC m=+210.355034198" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.618164 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.651383 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" podStartSLOduration=12.651353584 podStartE2EDuration="12.651353584s" podCreationTimestamp="2026-02-19 08:24:15 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:27.624996912 +0000 UTC m=+210.368654371" watchObservedRunningTime="2026-02-19 08:24:27.651353584 +0000 UTC m=+210.395011053" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.660608 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.660862 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pcwn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppA
rmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kbfwg_openshift-marketplace(f67dbc25-bb01-4883-b25e-c34d66a3b4fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.662104 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kbfwg" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.665965 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.665946744 podStartE2EDuration="4.665946744s" podCreationTimestamp="2026-02-19 08:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:27.663619779 +0000 UTC m=+210.407277238" watchObservedRunningTime="2026-02-19 08:24:27.665946744 +0000 UTC m=+210.409604203" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.676924 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.677097 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thrpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6rw27_openshift-marketplace(ff2c7521-96f7-4727-b14b-537d7b9ead0d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:24:27 crc kubenswrapper[4780]: E0219 08:24:27.678344 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6rw27" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.687042 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.687020057 podStartE2EDuration="10.687020057s" podCreationTimestamp="2026-02-19 08:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:27.682224562 +0000 UTC m=+210.425882011" watchObservedRunningTime="2026-02-19 08:24:27.687020057 +0000 UTC m=+210.430677516" Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.731504 4780 patch_prober.go:28] interesting pod/route-controller-manager-67b7ddd96d-xrxkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:42724->10.217.0.54:8443: read: connection reset by peer" start-of-body= Feb 19 08:24:27 crc kubenswrapper[4780]: I0219 08:24:27.731581 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" podUID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:42724->10.217.0.54:8443: read: connection reset by peer" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.625985 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jg765" event={"ID":"d1002d5b-b8b1-4175-9e36-9fbea7a1c060","Type":"ContainerStarted","Data":"b2cae2e2f0266d9281e1cd3a300470d3be74404468f0c80e3d4a69681559b403"} Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 
08:24:28.630214 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-67b7ddd96d-xrxkq_602c50a6-ac7d-4df2-b4cb-78cf32e894e8/route-controller-manager/0.log" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.630266 4780 generic.go:334] "Generic (PLEG): container finished" podID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" containerID="e1eb1151263f824bcb7c8cc9f58afffe6bb1116f1fc640330e485c9e2784d806" exitCode=255 Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.630491 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" event={"ID":"602c50a6-ac7d-4df2-b4cb-78cf32e894e8","Type":"ContainerDied","Data":"e1eb1151263f824bcb7c8cc9f58afffe6bb1116f1fc640330e485c9e2784d806"} Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.634520 4780 generic.go:334] "Generic (PLEG): container finished" podID="693579ef-7bf3-47d2-85f3-493aea67e524" containerID="3175efa05312537a65679520b463365fdf6d2004fab12fd4e108fe722f5a4b64" exitCode=0 Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.635793 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"693579ef-7bf3-47d2-85f3-493aea67e524","Type":"ContainerDied","Data":"3175efa05312537a65679520b463365fdf6d2004fab12fd4e108fe722f5a4b64"} Feb 19 08:24:28 crc kubenswrapper[4780]: E0219 08:24:28.638526 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kbfwg" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" Feb 19 08:24:28 crc kubenswrapper[4780]: E0219 08:24:28.638968 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6rw27" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.647297 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jg765" podStartSLOduration=183.64727405 podStartE2EDuration="3m3.64727405s" podCreationTimestamp="2026-02-19 08:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:28.643763381 +0000 UTC m=+211.387420830" watchObservedRunningTime="2026-02-19 08:24:28.64727405 +0000 UTC m=+211.390931499" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.741967 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-67b7ddd96d-xrxkq_602c50a6-ac7d-4df2-b4cb-78cf32e894e8/route-controller-manager/0.log" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.742042 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.773317 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh"] Feb 19 08:24:28 crc kubenswrapper[4780]: E0219 08:24:28.773610 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" containerName="route-controller-manager" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.773632 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" containerName="route-controller-manager" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.773800 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" containerName="route-controller-manager" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.774334 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.795226 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh"] Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.890752 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47dbp\" (UniqueName: \"kubernetes.io/projected/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-kube-api-access-47dbp\") pod \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.890878 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-config\") pod \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.890928 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-client-ca\") pod \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.890949 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-serving-cert\") pod \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\" (UID: \"602c50a6-ac7d-4df2-b4cb-78cf32e894e8\") " Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.891193 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-client-ca\") pod 
\"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.891233 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e524d8-47da-4797-ad52-8f28db57ff7a-serving-cert\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.891342 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g4qv\" (UniqueName: \"kubernetes.io/projected/00e524d8-47da-4797-ad52-8f28db57ff7a-kube-api-access-9g4qv\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.891386 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-config\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.892184 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "602c50a6-ac7d-4df2-b4cb-78cf32e894e8" (UID: "602c50a6-ac7d-4df2-b4cb-78cf32e894e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.892267 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-config" (OuterVolumeSpecName: "config") pod "602c50a6-ac7d-4df2-b4cb-78cf32e894e8" (UID: "602c50a6-ac7d-4df2-b4cb-78cf32e894e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.896032 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "602c50a6-ac7d-4df2-b4cb-78cf32e894e8" (UID: "602c50a6-ac7d-4df2-b4cb-78cf32e894e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.897214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-kube-api-access-47dbp" (OuterVolumeSpecName: "kube-api-access-47dbp") pod "602c50a6-ac7d-4df2-b4cb-78cf32e894e8" (UID: "602c50a6-ac7d-4df2-b4cb-78cf32e894e8"). InnerVolumeSpecName "kube-api-access-47dbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992561 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g4qv\" (UniqueName: \"kubernetes.io/projected/00e524d8-47da-4797-ad52-8f28db57ff7a-kube-api-access-9g4qv\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992629 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-config\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992669 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-client-ca\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992692 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e524d8-47da-4797-ad52-8f28db57ff7a-serving-cert\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992796 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992810 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992824 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.992836 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47dbp\" (UniqueName: \"kubernetes.io/projected/602c50a6-ac7d-4df2-b4cb-78cf32e894e8-kube-api-access-47dbp\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.993594 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-client-ca\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.993970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-config\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:28 crc kubenswrapper[4780]: I0219 08:24:28.996789 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e524d8-47da-4797-ad52-8f28db57ff7a-serving-cert\") pod 
\"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.009831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g4qv\" (UniqueName: \"kubernetes.io/projected/00e524d8-47da-4797-ad52-8f28db57ff7a-kube-api-access-9g4qv\") pod \"route-controller-manager-5f77f744c8-2s4wh\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.089516 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.296116 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh"] Feb 19 08:24:29 crc kubenswrapper[4780]: W0219 08:24:29.302723 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00e524d8_47da_4797_ad52_8f28db57ff7a.slice/crio-ca42c20ac742a4988456e538404b08d1494b3f209612b6eee13bc7dbb09c0d17 WatchSource:0}: Error finding container ca42c20ac742a4988456e538404b08d1494b3f209612b6eee13bc7dbb09c0d17: Status 404 returned error can't find the container with id ca42c20ac742a4988456e538404b08d1494b3f209612b6eee13bc7dbb09c0d17 Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.642557 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-67b7ddd96d-xrxkq_602c50a6-ac7d-4df2-b4cb-78cf32e894e8/route-controller-manager/0.log" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.643043 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.643011 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq" event={"ID":"602c50a6-ac7d-4df2-b4cb-78cf32e894e8","Type":"ContainerDied","Data":"ca4a266d758a80a7c05a17e4fce3789d4fadc597e5d4134d0cf468b8fbd17130"} Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.643203 4780 scope.go:117] "RemoveContainer" containerID="e1eb1151263f824bcb7c8cc9f58afffe6bb1116f1fc640330e485c9e2784d806" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.645049 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" event={"ID":"00e524d8-47da-4797-ad52-8f28db57ff7a","Type":"ContainerStarted","Data":"b805ec8d1ced1deb83430aa8c10f41f04a3b32b77e91a7b9d6309f92d49698cc"} Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.645107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" event={"ID":"00e524d8-47da-4797-ad52-8f28db57ff7a","Type":"ContainerStarted","Data":"ca42c20ac742a4988456e538404b08d1494b3f209612b6eee13bc7dbb09c0d17"} Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.676426 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" podStartSLOduration=14.676400501 podStartE2EDuration="14.676400501s" podCreationTimestamp="2026-02-19 08:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:29.669744703 +0000 UTC m=+212.413402172" watchObservedRunningTime="2026-02-19 08:24:29.676400501 +0000 UTC m=+212.420057960" Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 
08:24:29.688484 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq"] Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.692490 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67b7ddd96d-xrxkq"] Feb 19 08:24:29 crc kubenswrapper[4780]: I0219 08:24:29.951216 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602c50a6-ac7d-4df2-b4cb-78cf32e894e8" path="/var/lib/kubelet/pods/602c50a6-ac7d-4df2-b4cb-78cf32e894e8/volumes" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.063063 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.209639 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/693579ef-7bf3-47d2-85f3-493aea67e524-kube-api-access\") pod \"693579ef-7bf3-47d2-85f3-493aea67e524\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.209969 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/693579ef-7bf3-47d2-85f3-493aea67e524-kubelet-dir\") pod \"693579ef-7bf3-47d2-85f3-493aea67e524\" (UID: \"693579ef-7bf3-47d2-85f3-493aea67e524\") " Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.210117 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/693579ef-7bf3-47d2-85f3-493aea67e524-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "693579ef-7bf3-47d2-85f3-493aea67e524" (UID: "693579ef-7bf3-47d2-85f3-493aea67e524"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.210335 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/693579ef-7bf3-47d2-85f3-493aea67e524-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.216481 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693579ef-7bf3-47d2-85f3-493aea67e524-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "693579ef-7bf3-47d2-85f3-493aea67e524" (UID: "693579ef-7bf3-47d2-85f3-493aea67e524"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.311793 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/693579ef-7bf3-47d2-85f3-493aea67e524-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.652590 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.657216 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"693579ef-7bf3-47d2-85f3-493aea67e524","Type":"ContainerDied","Data":"d9e36bc5c23dff20479ec6f6b5629780694083d94d83876fd27c252a1ca492ea"} Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.657301 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e36bc5c23dff20479ec6f6b5629780694083d94d83876fd27c252a1ca492ea" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.660922 4780 generic.go:334] "Generic (PLEG): container finished" podID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerID="d5d5701f8ce3fb753fa73c4f31ce3fa4b3df2ead4822d0ad574cc081bde55356" exitCode=0 Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.661434 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pknb2" event={"ID":"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95","Type":"ContainerDied","Data":"d5d5701f8ce3fb753fa73c4f31ce3fa4b3df2ead4822d0ad574cc081bde55356"} Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.662397 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:30 crc kubenswrapper[4780]: I0219 08:24:30.670676 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:32 crc kubenswrapper[4780]: I0219 08:24:32.683915 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pknb2" event={"ID":"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95","Type":"ContainerStarted","Data":"fe0a0bc4079db73285cf38ed5a12da0079d29764eeff963f6a75630bcf52bd8e"} Feb 19 08:24:32 crc kubenswrapper[4780]: I0219 
08:24:32.713164 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pknb2" podStartSLOduration=4.471165657 podStartE2EDuration="56.713105735s" podCreationTimestamp="2026-02-19 08:23:36 +0000 UTC" firstStartedPulling="2026-02-19 08:23:39.419950416 +0000 UTC m=+162.163607865" lastFinishedPulling="2026-02-19 08:24:31.661890494 +0000 UTC m=+214.405547943" observedRunningTime="2026-02-19 08:24:32.7114824 +0000 UTC m=+215.455139909" watchObservedRunningTime="2026-02-19 08:24:32.713105735 +0000 UTC m=+215.456763194" Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.336352 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.336821 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.336890 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.337683 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:24:36 crc 
kubenswrapper[4780]: I0219 08:24:36.337815 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797" gracePeriod=600 Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.711853 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797" exitCode=0 Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.711882 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797"} Feb 19 08:24:36 crc kubenswrapper[4780]: I0219 08:24:36.716262 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerStarted","Data":"a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca"} Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.160549 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.160927 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.314466 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.725645 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"d4f5b86a0f96c708c0707b9d0e1e3124f0246704294706485733335212a268e0"} Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.729782 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerDied","Data":"a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca"} Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.730814 4780 generic.go:334] "Generic (PLEG): container finished" podID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerID="a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca" exitCode=0 Feb 19 08:24:37 crc kubenswrapper[4780]: I0219 08:24:37.776908 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pknb2" Feb 19 08:24:38 crc kubenswrapper[4780]: I0219 08:24:38.740595 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerStarted","Data":"d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c"} Feb 19 08:24:38 crc kubenswrapper[4780]: I0219 08:24:38.760912 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thv5l" podStartSLOduration=3.539152999 podStartE2EDuration="58.760888544s" podCreationTimestamp="2026-02-19 08:23:40 +0000 UTC" firstStartedPulling="2026-02-19 08:23:42.99774929 +0000 UTC m=+165.741406749" lastFinishedPulling="2026-02-19 08:24:38.219484845 +0000 UTC m=+220.963142294" observedRunningTime="2026-02-19 08:24:38.757334474 +0000 UTC m=+221.500991933" watchObservedRunningTime="2026-02-19 08:24:38.760888544 +0000 UTC m=+221.504545993" Feb 19 08:24:40 crc kubenswrapper[4780]: I0219 08:24:40.687440 
4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:24:40 crc kubenswrapper[4780]: I0219 08:24:40.687777 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:24:41 crc kubenswrapper[4780]: I0219 08:24:41.727328 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-thv5l" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="registry-server" probeResult="failure" output=< Feb 19 08:24:41 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 08:24:41 crc kubenswrapper[4780]: > Feb 19 08:24:41 crc kubenswrapper[4780]: I0219 08:24:41.759339 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerID="07ab2454109977801b84f46c1ce805f08ec5a56154f59ae4156f0b697fc19364" exitCode=0 Feb 19 08:24:41 crc kubenswrapper[4780]: I0219 08:24:41.759424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5jxl" event={"ID":"2e76db53-981b-4921-bfc0-8bb607700a4c","Type":"ContainerDied","Data":"07ab2454109977801b84f46c1ce805f08ec5a56154f59ae4156f0b697fc19364"} Feb 19 08:24:41 crc kubenswrapper[4780]: I0219 08:24:41.761926 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerStarted","Data":"2c96a69880669a4aa867d919117eee9f7ec32dda338c9d39e1a4be59a2cb3c75"} Feb 19 08:24:42 crc kubenswrapper[4780]: I0219 08:24:42.768489 4780 generic.go:334] "Generic (PLEG): container finished" podID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerID="2c96a69880669a4aa867d919117eee9f7ec32dda338c9d39e1a4be59a2cb3c75" exitCode=0 Feb 19 08:24:42 crc kubenswrapper[4780]: I0219 08:24:42.769096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerDied","Data":"2c96a69880669a4aa867d919117eee9f7ec32dda338c9d39e1a4be59a2cb3c75"} Feb 19 08:24:50 crc kubenswrapper[4780]: I0219 08:24:50.769908 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:24:50 crc kubenswrapper[4780]: I0219 08:24:50.819103 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:24:51 crc kubenswrapper[4780]: I0219 08:24:51.003319 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thv5l"] Feb 19 08:24:51 crc kubenswrapper[4780]: I0219 08:24:51.820809 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerStarted","Data":"19ca5fa236b0864cf4caad14ef6802f775fea98b21a0d116bbfc1fe0eb876a81"} Feb 19 08:24:51 crc kubenswrapper[4780]: I0219 08:24:51.833385 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-thv5l" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="registry-server" containerID="cri-o://d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c" gracePeriod=2 Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.236818 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.365470 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-catalog-content\") pod \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.365605 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-utilities\") pod \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.365859 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7rzb\" (UniqueName: \"kubernetes.io/projected/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-kube-api-access-p7rzb\") pod \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\" (UID: \"da9e2878-8ae0-4de1-89a9-f0c55ff84d18\") " Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.366584 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-utilities" (OuterVolumeSpecName: "utilities") pod "da9e2878-8ae0-4de1-89a9-f0c55ff84d18" (UID: "da9e2878-8ae0-4de1-89a9-f0c55ff84d18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.371239 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-kube-api-access-p7rzb" (OuterVolumeSpecName: "kube-api-access-p7rzb") pod "da9e2878-8ae0-4de1-89a9-f0c55ff84d18" (UID: "da9e2878-8ae0-4de1-89a9-f0c55ff84d18"). InnerVolumeSpecName "kube-api-access-p7rzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.466863 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.466901 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7rzb\" (UniqueName: \"kubernetes.io/projected/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-kube-api-access-p7rzb\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.496582 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da9e2878-8ae0-4de1-89a9-f0c55ff84d18" (UID: "da9e2878-8ae0-4de1-89a9-f0c55ff84d18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.567896 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9e2878-8ae0-4de1-89a9-f0c55ff84d18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.840933 4780 generic.go:334] "Generic (PLEG): container finished" podID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerID="d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c" exitCode=0 Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.841002 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerDied","Data":"d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.841061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-thv5l" event={"ID":"da9e2878-8ae0-4de1-89a9-f0c55ff84d18","Type":"ContainerDied","Data":"f6154e558e4845e998473759511283701cf79075e61df12032736ac06530bdbc"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.841086 4780 scope.go:117] "RemoveContainer" containerID="d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.841227 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv5l" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.844303 4780 generic.go:334] "Generic (PLEG): container finished" podID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerID="19ca5fa236b0864cf4caad14ef6802f775fea98b21a0d116bbfc1fe0eb876a81" exitCode=0 Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.844397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerDied","Data":"19ca5fa236b0864cf4caad14ef6802f775fea98b21a0d116bbfc1fe0eb876a81"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.846188 4780 generic.go:334] "Generic (PLEG): container finished" podID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerID="939603631d36564e1f3fd13d81896c7c0a336e7348d3bdb631bb6f427b5a3663" exitCode=0 Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.846245 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbfwg" event={"ID":"f67dbc25-bb01-4883-b25e-c34d66a3b4fe","Type":"ContainerDied","Data":"939603631d36564e1f3fd13d81896c7c0a336e7348d3bdb631bb6f427b5a3663"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.849974 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5jxl" 
event={"ID":"2e76db53-981b-4921-bfc0-8bb607700a4c","Type":"ContainerStarted","Data":"20de50a66059022f101ea1ee2503910dd26baab8dab8ae56d19038ea231c69ac"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.857113 4780 generic.go:334] "Generic (PLEG): container finished" podID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerID="b19ee7e34d3a8cbef9bc8e00b718b33f7b5b6de36442de60f57f2266ef324ce8" exitCode=0 Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.857198 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rw27" event={"ID":"ff2c7521-96f7-4727-b14b-537d7b9ead0d","Type":"ContainerDied","Data":"b19ee7e34d3a8cbef9bc8e00b718b33f7b5b6de36442de60f57f2266ef324ce8"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.859875 4780 scope.go:117] "RemoveContainer" containerID="a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.861486 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerStarted","Data":"6d0d77e665e1985ad87939b05c97190746a5f20f5bc65e91d50689d70b8d85fd"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.864543 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerID="14d02f5bbbb240ce7d6d1d07d58d91e0340c12b6a4717886631ba8fd74e3c340" exitCode=0 Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.864572 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerDied","Data":"14d02f5bbbb240ce7d6d1d07d58d91e0340c12b6a4717886631ba8fd74e3c340"} Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.923631 4780 scope.go:117] "RemoveContainer" containerID="fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e" Feb 19 
08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.949304 4780 scope.go:117] "RemoveContainer" containerID="d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c" Feb 19 08:24:52 crc kubenswrapper[4780]: E0219 08:24:52.955102 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c\": container with ID starting with d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c not found: ID does not exist" containerID="d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.955161 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c"} err="failed to get container status \"d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c\": rpc error: code = NotFound desc = could not find container \"d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c\": container with ID starting with d4077dff50ab359125218be15dffa4004ea7975a9d799b1e7051ac0a5e49dd6c not found: ID does not exist" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.955189 4780 scope.go:117] "RemoveContainer" containerID="a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca" Feb 19 08:24:52 crc kubenswrapper[4780]: E0219 08:24:52.958487 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca\": container with ID starting with a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca not found: ID does not exist" containerID="a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.958516 4780 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca"} err="failed to get container status \"a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca\": rpc error: code = NotFound desc = could not find container \"a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca\": container with ID starting with a5f60ef7c9256a1cf0d57ef84bd54e6dccd9e128d9bd1579ff1e325a25745eca not found: ID does not exist" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.958534 4780 scope.go:117] "RemoveContainer" containerID="fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e" Feb 19 08:24:52 crc kubenswrapper[4780]: E0219 08:24:52.963200 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e\": container with ID starting with fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e not found: ID does not exist" containerID="fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.963262 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e"} err="failed to get container status \"fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e\": rpc error: code = NotFound desc = could not find container \"fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e\": container with ID starting with fb626b4b72fc084023d12d8f35e74def99c32ed875f7d44ce3a2f71bae8b1f8e not found: ID does not exist" Feb 19 08:24:52 crc kubenswrapper[4780]: I0219 08:24:52.971599 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x5jxl" podStartSLOduration=4.287813384 podStartE2EDuration="1m14.971580923s" podCreationTimestamp="2026-02-19 
08:23:38 +0000 UTC" firstStartedPulling="2026-02-19 08:23:40.82343456 +0000 UTC m=+163.567092009" lastFinishedPulling="2026-02-19 08:24:51.507202099 +0000 UTC m=+234.250859548" observedRunningTime="2026-02-19 08:24:52.954845632 +0000 UTC m=+235.698503081" watchObservedRunningTime="2026-02-19 08:24:52.971580923 +0000 UTC m=+235.715238372" Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.004159 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xfb9z" podStartSLOduration=3.861666673 podStartE2EDuration="1m16.004119768s" podCreationTimestamp="2026-02-19 08:23:37 +0000 UTC" firstStartedPulling="2026-02-19 08:23:39.420373119 +0000 UTC m=+162.164030568" lastFinishedPulling="2026-02-19 08:24:51.562826174 +0000 UTC m=+234.306483663" observedRunningTime="2026-02-19 08:24:53.00313254 +0000 UTC m=+235.746789989" watchObservedRunningTime="2026-02-19 08:24:53.004119768 +0000 UTC m=+235.747777237" Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.012331 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thv5l"] Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.016667 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-thv5l"] Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.877050 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rw27" event={"ID":"ff2c7521-96f7-4727-b14b-537d7b9ead0d","Type":"ContainerStarted","Data":"62a918f3b7e21f75f29902dfa8034e8fcef812ef115a999062ce1c0ece235506"} Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.879683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerStarted","Data":"221c04689c3c398131d201a1c167280e3fee570e69a0bace423bbd98dba66774"} Feb 19 08:24:53 crc 
kubenswrapper[4780]: I0219 08:24:53.882851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerStarted","Data":"cdec0230b52d175e62a13b0106364993d22c29422278c0f2b7d89eccd63dbf55"} Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.886377 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbfwg" event={"ID":"f67dbc25-bb01-4883-b25e-c34d66a3b4fe","Type":"ContainerStarted","Data":"3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0"} Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.929099 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rw27" podStartSLOduration=3.978756458 podStartE2EDuration="1m17.929079998s" podCreationTimestamp="2026-02-19 08:23:36 +0000 UTC" firstStartedPulling="2026-02-19 08:23:39.410179974 +0000 UTC m=+162.153837423" lastFinishedPulling="2026-02-19 08:24:53.360503504 +0000 UTC m=+236.104160963" observedRunningTime="2026-02-19 08:24:53.899380223 +0000 UTC m=+236.643037672" watchObservedRunningTime="2026-02-19 08:24:53.929079998 +0000 UTC m=+236.672737447" Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.929903 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kbfwg" podStartSLOduration=4.624795123 podStartE2EDuration="1m14.929897881s" podCreationTimestamp="2026-02-19 08:23:39 +0000 UTC" firstStartedPulling="2026-02-19 08:23:42.984325626 +0000 UTC m=+165.727983075" lastFinishedPulling="2026-02-19 08:24:53.289428364 +0000 UTC m=+236.033085833" observedRunningTime="2026-02-19 08:24:53.924084208 +0000 UTC m=+236.667741657" watchObservedRunningTime="2026-02-19 08:24:53.929897881 +0000 UTC m=+236.673555330" Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.945538 4780 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" path="/var/lib/kubelet/pods/da9e2878-8ae0-4de1-89a9-f0c55ff84d18/volumes" Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.949168 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qhjsh" podStartSLOduration=3.079866553 podStartE2EDuration="1m17.949150193s" podCreationTimestamp="2026-02-19 08:23:36 +0000 UTC" firstStartedPulling="2026-02-19 08:23:38.398400102 +0000 UTC m=+161.142057551" lastFinishedPulling="2026-02-19 08:24:53.267683732 +0000 UTC m=+236.011341191" observedRunningTime="2026-02-19 08:24:53.9465415 +0000 UTC m=+236.690198949" watchObservedRunningTime="2026-02-19 08:24:53.949150193 +0000 UTC m=+236.692807642" Feb 19 08:24:53 crc kubenswrapper[4780]: I0219 08:24:53.970631 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dr9mt" podStartSLOduration=2.4252955480000002 podStartE2EDuration="1m14.970611127s" podCreationTimestamp="2026-02-19 08:23:39 +0000 UTC" firstStartedPulling="2026-02-19 08:23:40.789503782 +0000 UTC m=+163.533161231" lastFinishedPulling="2026-02-19 08:24:53.334819361 +0000 UTC m=+236.078476810" observedRunningTime="2026-02-19 08:24:53.967460178 +0000 UTC m=+236.711117627" watchObservedRunningTime="2026-02-19 08:24:53.970611127 +0000 UTC m=+236.714268576" Feb 19 08:24:55 crc kubenswrapper[4780]: I0219 08:24:55.815011 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d76974-82gcd"] Feb 19 08:24:55 crc kubenswrapper[4780]: I0219 08:24:55.815564 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" podUID="968a1eff-6d04-44fa-95e0-3d6f038e7750" containerName="controller-manager" containerID="cri-o://b0f470cc86ddc6d33ab043c14bf86a1d269d5277041cd18c56c6a47bc10d08e2" gracePeriod=30 
Feb 19 08:24:55 crc kubenswrapper[4780]: I0219 08:24:55.917143 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh"] Feb 19 08:24:55 crc kubenswrapper[4780]: I0219 08:24:55.917478 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" podUID="00e524d8-47da-4797-ad52-8f28db57ff7a" containerName="route-controller-manager" containerID="cri-o://b805ec8d1ced1deb83430aa8c10f41f04a3b32b77e91a7b9d6309f92d49698cc" gracePeriod=30 Feb 19 08:24:56 crc kubenswrapper[4780]: I0219 08:24:56.907514 4780 generic.go:334] "Generic (PLEG): container finished" podID="00e524d8-47da-4797-ad52-8f28db57ff7a" containerID="b805ec8d1ced1deb83430aa8c10f41f04a3b32b77e91a7b9d6309f92d49698cc" exitCode=0 Feb 19 08:24:56 crc kubenswrapper[4780]: I0219 08:24:56.907642 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" event={"ID":"00e524d8-47da-4797-ad52-8f28db57ff7a","Type":"ContainerDied","Data":"b805ec8d1ced1deb83430aa8c10f41f04a3b32b77e91a7b9d6309f92d49698cc"} Feb 19 08:24:56 crc kubenswrapper[4780]: I0219 08:24:56.911577 4780 generic.go:334] "Generic (PLEG): container finished" podID="968a1eff-6d04-44fa-95e0-3d6f038e7750" containerID="b0f470cc86ddc6d33ab043c14bf86a1d269d5277041cd18c56c6a47bc10d08e2" exitCode=0 Feb 19 08:24:56 crc kubenswrapper[4780]: I0219 08:24:56.911644 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" event={"ID":"968a1eff-6d04-44fa-95e0-3d6f038e7750","Type":"ContainerDied","Data":"b0f470cc86ddc6d33ab043c14bf86a1d269d5277041cd18c56c6a47bc10d08e2"} Feb 19 08:24:56 crc kubenswrapper[4780]: I0219 08:24:56.947650 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qhjsh" 
Feb 19 08:24:56 crc kubenswrapper[4780]: I0219 08:24:56.948145 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.000716 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.067668 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.071770 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-config\") pod \"00e524d8-47da-4797-ad52-8f28db57ff7a\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.071852 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-client-ca\") pod \"00e524d8-47da-4797-ad52-8f28db57ff7a\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.071887 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e524d8-47da-4797-ad52-8f28db57ff7a-serving-cert\") pod \"00e524d8-47da-4797-ad52-8f28db57ff7a\" (UID: \"00e524d8-47da-4797-ad52-8f28db57ff7a\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.072006 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g4qv\" (UniqueName: \"kubernetes.io/projected/00e524d8-47da-4797-ad52-8f28db57ff7a-kube-api-access-9g4qv\") pod \"00e524d8-47da-4797-ad52-8f28db57ff7a\" (UID: 
\"00e524d8-47da-4797-ad52-8f28db57ff7a\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.072970 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "00e524d8-47da-4797-ad52-8f28db57ff7a" (UID: "00e524d8-47da-4797-ad52-8f28db57ff7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.073649 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-config" (OuterVolumeSpecName: "config") pod "00e524d8-47da-4797-ad52-8f28db57ff7a" (UID: "00e524d8-47da-4797-ad52-8f28db57ff7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.079561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e524d8-47da-4797-ad52-8f28db57ff7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00e524d8-47da-4797-ad52-8f28db57ff7a" (UID: "00e524d8-47da-4797-ad52-8f28db57ff7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.084390 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e524d8-47da-4797-ad52-8f28db57ff7a-kube-api-access-9g4qv" (OuterVolumeSpecName: "kube-api-access-9g4qv") pod "00e524d8-47da-4797-ad52-8f28db57ff7a" (UID: "00e524d8-47da-4797-ad52-8f28db57ff7a"). InnerVolumeSpecName "kube-api-access-9g4qv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.105918 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz"] Feb 19 08:24:57 crc kubenswrapper[4780]: E0219 08:24:57.106237 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e524d8-47da-4797-ad52-8f28db57ff7a" containerName="route-controller-manager" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106259 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e524d8-47da-4797-ad52-8f28db57ff7a" containerName="route-controller-manager" Feb 19 08:24:57 crc kubenswrapper[4780]: E0219 08:24:57.106278 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="extract-utilities" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106287 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="extract-utilities" Feb 19 08:24:57 crc kubenswrapper[4780]: E0219 08:24:57.106305 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="extract-content" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106313 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="extract-content" Feb 19 08:24:57 crc kubenswrapper[4780]: E0219 08:24:57.106324 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693579ef-7bf3-47d2-85f3-493aea67e524" containerName="pruner" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106332 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="693579ef-7bf3-47d2-85f3-493aea67e524" containerName="pruner" Feb 19 08:24:57 crc kubenswrapper[4780]: E0219 08:24:57.106347 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="registry-server" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106354 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="registry-server" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106459 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="693579ef-7bf3-47d2-85f3-493aea67e524" containerName="pruner" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106468 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e2878-8ae0-4de1-89a9-f0c55ff84d18" containerName="registry-server" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106478 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e524d8-47da-4797-ad52-8f28db57ff7a" containerName="route-controller-manager" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.106909 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.126828 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz"] Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.172854 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/964f7b64-5b8b-412d-80f3-cf95e2430113-serving-cert\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.172913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vgh5\" (UniqueName: 
\"kubernetes.io/projected/964f7b64-5b8b-412d-80f3-cf95e2430113-kube-api-access-7vgh5\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.172993 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-client-ca\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.173021 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-config\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.173065 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g4qv\" (UniqueName: \"kubernetes.io/projected/00e524d8-47da-4797-ad52-8f28db57ff7a-kube-api-access-9g4qv\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.173080 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.173092 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e524d8-47da-4797-ad52-8f28db57ff7a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: 
I0219 08:24:57.173105 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e524d8-47da-4797-ad52-8f28db57ff7a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.246165 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-proxy-ca-bundles\") pod \"968a1eff-6d04-44fa-95e0-3d6f038e7750\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274595 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-config\") pod \"968a1eff-6d04-44fa-95e0-3d6f038e7750\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274621 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b95nf\" (UniqueName: \"kubernetes.io/projected/968a1eff-6d04-44fa-95e0-3d6f038e7750-kube-api-access-b95nf\") pod \"968a1eff-6d04-44fa-95e0-3d6f038e7750\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274679 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-client-ca\") pod \"968a1eff-6d04-44fa-95e0-3d6f038e7750\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274750 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/968a1eff-6d04-44fa-95e0-3d6f038e7750-serving-cert\") pod \"968a1eff-6d04-44fa-95e0-3d6f038e7750\" (UID: \"968a1eff-6d04-44fa-95e0-3d6f038e7750\") " Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/964f7b64-5b8b-412d-80f3-cf95e2430113-serving-cert\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vgh5\" (UniqueName: \"kubernetes.io/projected/964f7b64-5b8b-412d-80f3-cf95e2430113-kube-api-access-7vgh5\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.274976 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-client-ca\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.275001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-config\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 
08:24:57.275531 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "968a1eff-6d04-44fa-95e0-3d6f038e7750" (UID: "968a1eff-6d04-44fa-95e0-3d6f038e7750"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.276588 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-client-ca" (OuterVolumeSpecName: "client-ca") pod "968a1eff-6d04-44fa-95e0-3d6f038e7750" (UID: "968a1eff-6d04-44fa-95e0-3d6f038e7750"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.277108 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-config" (OuterVolumeSpecName: "config") pod "968a1eff-6d04-44fa-95e0-3d6f038e7750" (UID: "968a1eff-6d04-44fa-95e0-3d6f038e7750"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.277256 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-client-ca\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.279852 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/964f7b64-5b8b-412d-80f3-cf95e2430113-serving-cert\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.283277 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-config\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.283878 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968a1eff-6d04-44fa-95e0-3d6f038e7750-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "968a1eff-6d04-44fa-95e0-3d6f038e7750" (UID: "968a1eff-6d04-44fa-95e0-3d6f038e7750"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.283901 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968a1eff-6d04-44fa-95e0-3d6f038e7750-kube-api-access-b95nf" (OuterVolumeSpecName: "kube-api-access-b95nf") pod "968a1eff-6d04-44fa-95e0-3d6f038e7750" (UID: "968a1eff-6d04-44fa-95e0-3d6f038e7750"). InnerVolumeSpecName "kube-api-access-b95nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.296622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vgh5\" (UniqueName: \"kubernetes.io/projected/964f7b64-5b8b-412d-80f3-cf95e2430113-kube-api-access-7vgh5\") pod \"route-controller-manager-6cbf4d4c75-784kz\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.362854 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.362932 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.375681 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.375714 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968a1eff-6d04-44fa-95e0-3d6f038e7750-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.375728 4780 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.375740 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968a1eff-6d04-44fa-95e0-3d6f038e7750-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.375750 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b95nf\" (UniqueName: \"kubernetes.io/projected/968a1eff-6d04-44fa-95e0-3d6f038e7750-kube-api-access-b95nf\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.399704 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.431327 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.625941 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.626018 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.689848 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.892764 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz"] Feb 19 08:24:57 crc kubenswrapper[4780]: W0219 08:24:57.902846 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod964f7b64_5b8b_412d_80f3_cf95e2430113.slice/crio-a0546927775386d207fb42c99c0d6fb8e7acb2f3364da37122052a70ca62ddd6 WatchSource:0}: Error finding container a0546927775386d207fb42c99c0d6fb8e7acb2f3364da37122052a70ca62ddd6: Status 404 returned error can't find the container with id a0546927775386d207fb42c99c0d6fb8e7acb2f3364da37122052a70ca62ddd6 Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.951482 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.971080 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh" event={"ID":"00e524d8-47da-4797-ad52-8f28db57ff7a","Type":"ContainerDied","Data":"ca42c20ac742a4988456e538404b08d1494b3f209612b6eee13bc7dbb09c0d17"} Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.971150 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" event={"ID":"964f7b64-5b8b-412d-80f3-cf95e2430113","Type":"ContainerStarted","Data":"a0546927775386d207fb42c99c0d6fb8e7acb2f3364da37122052a70ca62ddd6"} Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.971177 4780 scope.go:117] "RemoveContainer" containerID="b805ec8d1ced1deb83430aa8c10f41f04a3b32b77e91a7b9d6309f92d49698cc" Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.995655 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" event={"ID":"968a1eff-6d04-44fa-95e0-3d6f038e7750","Type":"ContainerDied","Data":"de82e09e70f7250aa10e25a8e81b0b3e02cc53712f34652494a8b7f8844ba3f6"} Feb 19 08:24:57 crc kubenswrapper[4780]: I0219 08:24:57.995698 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d76974-82gcd" Feb 19 08:24:58 crc kubenswrapper[4780]: I0219 08:24:58.091089 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:24:58 crc kubenswrapper[4780]: I0219 08:24:58.091421 4780 scope.go:117] "RemoveContainer" containerID="b0f470cc86ddc6d33ab043c14bf86a1d269d5277041cd18c56c6a47bc10d08e2" Feb 19 08:24:58 crc kubenswrapper[4780]: I0219 08:24:58.125221 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh"] Feb 19 08:24:58 crc kubenswrapper[4780]: I0219 08:24:58.128917 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f77f744c8-2s4wh"] Feb 19 08:24:58 crc kubenswrapper[4780]: I0219 08:24:58.137218 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d76974-82gcd"] Feb 19 08:24:58 crc kubenswrapper[4780]: I0219 08:24:58.141731 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d76974-82gcd"] Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.064793 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qhjsh" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.127584 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.128502 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.249825 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.274843 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q9p69"] Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.538247 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.538322 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.585558 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.642644 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cdbdbf77c-2blgp"] Feb 19 08:24:59 crc kubenswrapper[4780]: E0219 08:24:59.643412 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968a1eff-6d04-44fa-95e0-3d6f038e7750" containerName="controller-manager" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.643438 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="968a1eff-6d04-44fa-95e0-3d6f038e7750" containerName="controller-manager" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.643562 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="968a1eff-6d04-44fa-95e0-3d6f038e7750" containerName="controller-manager" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.643976 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.653323 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.653323 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.653824 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.653910 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.654230 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.658841 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.659134 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.666872 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cdbdbf77c-2blgp"] Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.842246 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-config\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " 
pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.842356 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e066fa01-d14b-4a6c-948b-522941afea4a-serving-cert\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.842422 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-client-ca\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.842454 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-proxy-ca-bundles\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.842476 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shls\" (UniqueName: \"kubernetes.io/projected/e066fa01-d14b-4a6c-948b-522941afea4a-kube-api-access-8shls\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.943234 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-client-ca\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.943279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-proxy-ca-bundles\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.943297 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8shls\" (UniqueName: \"kubernetes.io/projected/e066fa01-d14b-4a6c-948b-522941afea4a-kube-api-access-8shls\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.943343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-config\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.943373 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e066fa01-d14b-4a6c-948b-522941afea4a-serving-cert\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.944857 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-proxy-ca-bundles\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.944864 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-client-ca\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.945295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-config\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.946540 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e524d8-47da-4797-ad52-8f28db57ff7a" path="/var/lib/kubelet/pods/00e524d8-47da-4797-ad52-8f28db57ff7a/volumes" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.947342 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968a1eff-6d04-44fa-95e0-3d6f038e7750" path="/var/lib/kubelet/pods/968a1eff-6d04-44fa-95e0-3d6f038e7750/volumes" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.950636 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e066fa01-d14b-4a6c-948b-522941afea4a-serving-cert\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " 
pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:24:59 crc kubenswrapper[4780]: I0219 08:24:59.966920 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8shls\" (UniqueName: \"kubernetes.io/projected/e066fa01-d14b-4a6c-948b-522941afea4a-kube-api-access-8shls\") pod \"controller-manager-cdbdbf77c-2blgp\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.022840 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" event={"ID":"964f7b64-5b8b-412d-80f3-cf95e2430113","Type":"ContainerStarted","Data":"69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a"} Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.082765 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.088576 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x5jxl" Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.117289 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.117352 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.267026 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.479252 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cdbdbf77c-2blgp"] Feb 19 08:25:00 crc kubenswrapper[4780]: W0219 08:25:00.488329 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode066fa01_d14b_4a6c_948b_522941afea4a.slice/crio-e80f31dc34f3ee0bce9a39c7397768d355976e197e093bb0cb20033f7709bc6a WatchSource:0}: Error finding container e80f31dc34f3ee0bce9a39c7397768d355976e197e093bb0cb20033f7709bc6a: Status 404 returned error can't find the container with id e80f31dc34f3ee0bce9a39c7397768d355976e197e093bb0cb20033f7709bc6a Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.807162 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfb9z"] Feb 19 08:25:00 crc kubenswrapper[4780]: I0219 08:25:00.807803 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xfb9z" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="registry-server" containerID="cri-o://6d0d77e665e1985ad87939b05c97190746a5f20f5bc65e91d50689d70b8d85fd" gracePeriod=2 Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.030933 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" event={"ID":"e066fa01-d14b-4a6c-948b-522941afea4a","Type":"ContainerStarted","Data":"9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257"} Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.030987 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" 
event={"ID":"e066fa01-d14b-4a6c-948b-522941afea4a","Type":"ContainerStarted","Data":"e80f31dc34f3ee0bce9a39c7397768d355976e197e093bb0cb20033f7709bc6a"} Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.031527 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.039401 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.059986 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" podStartSLOduration=6.059961606 podStartE2EDuration="6.059961606s" podCreationTimestamp="2026-02-19 08:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:01.057825646 +0000 UTC m=+243.801483095" watchObservedRunningTime="2026-02-19 08:25:01.059961606 +0000 UTC m=+243.803619065" Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.084826 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" podStartSLOduration=6.084800404 podStartE2EDuration="6.084800404s" podCreationTimestamp="2026-02-19 08:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:01.083107527 +0000 UTC m=+243.826764976" watchObservedRunningTime="2026-02-19 08:25:01.084800404 +0000 UTC m=+243.828457853" Feb 19 08:25:01 crc kubenswrapper[4780]: I0219 08:25:01.167141 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kbfwg" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" 
containerName="registry-server" probeResult="failure" output=< Feb 19 08:25:01 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 08:25:01 crc kubenswrapper[4780]: > Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.041957 4780 generic.go:334] "Generic (PLEG): container finished" podID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerID="6d0d77e665e1985ad87939b05c97190746a5f20f5bc65e91d50689d70b8d85fd" exitCode=0 Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.042012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerDied","Data":"6d0d77e665e1985ad87939b05c97190746a5f20f5bc65e91d50689d70b8d85fd"} Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.186624 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.375518 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsxvj\" (UniqueName: \"kubernetes.io/projected/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-kube-api-access-dsxvj\") pod \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.375651 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-utilities\") pod \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\" (UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.375681 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-catalog-content\") pod \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\" 
(UID: \"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e\") " Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.376873 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-utilities" (OuterVolumeSpecName: "utilities") pod "2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" (UID: "2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.392377 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-kube-api-access-dsxvj" (OuterVolumeSpecName: "kube-api-access-dsxvj") pod "2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" (UID: "2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e"). InnerVolumeSpecName "kube-api-access-dsxvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.455735 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" (UID: "2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.476880 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.476908 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:02 crc kubenswrapper[4780]: I0219 08:25:02.476919 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsxvj\" (UniqueName: \"kubernetes.io/projected/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e-kube-api-access-dsxvj\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.055447 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfb9z" Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.055431 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfb9z" event={"ID":"2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e","Type":"ContainerDied","Data":"4fbe63911374cdf370617959601722a2817828cc990287c30cd255092e728fee"} Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.055544 4780 scope.go:117] "RemoveContainer" containerID="6d0d77e665e1985ad87939b05c97190746a5f20f5bc65e91d50689d70b8d85fd" Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.091114 4780 scope.go:117] "RemoveContainer" containerID="2c96a69880669a4aa867d919117eee9f7ec32dda338c9d39e1a4be59a2cb3c75" Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.113656 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfb9z"] Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.118613 4780 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xfb9z"] Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.130868 4780 scope.go:117] "RemoveContainer" containerID="8b1d612784f534b113a1eb94b32d5d196d0a2b661ea08a3b1b8a0998a8ab2882" Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.203270 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr9mt"] Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.203570 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dr9mt" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="registry-server" containerID="cri-o://221c04689c3c398131d201a1c167280e3fee570e69a0bace423bbd98dba66774" gracePeriod=2 Feb 19 08:25:03 crc kubenswrapper[4780]: I0219 08:25:03.963395 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" path="/var/lib/kubelet/pods/2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e/volumes" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.898662 4780 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.898965 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="extract-content" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.898981 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="extract-content" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.898989 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="extract-utilities" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.898996 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="extract-utilities" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.899003 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="registry-server" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.899010 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="registry-server" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.899159 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2052e1d5-3ae7-4acc-a14f-ae2ac2327e4e" containerName="registry-server" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.899906 4780 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.900155 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.900318 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b" gracePeriod=15 Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.900359 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818" gracePeriod=15 Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.900455 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be" gracePeriod=15 Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.900464 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf" gracePeriod=15 Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.900513 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e" gracePeriod=15 Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904180 4780 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904541 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904583 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904614 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904632 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904656 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904674 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904699 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904715 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904734 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904750 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904777 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904793 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 08:25:04 crc kubenswrapper[4780]: E0219 08:25:04.904826 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.904842 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 
19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.905083 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.905167 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.905194 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.905211 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.905233 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.905691 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:25:04 crc kubenswrapper[4780]: I0219 08:25:04.955517 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020455 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020529 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020640 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020719 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020753 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020785 
4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.020830 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122218 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122281 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122313 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122425 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122497 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122522 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122548 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122638 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122689 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122717 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.122744 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.123657 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.123715 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.123738 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.123810 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.248036 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.291658 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]log ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]api-openshift-apiserver-available ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]api-openshift-oauth-apiserver-available ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]informer-sync ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/priority-and-fairness-filter ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-apiextensions-informers ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-apiextensions-controllers ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/crd-informer-synced ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-system-namespaces-controller ok Feb 
19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/rbac/bootstrap-roles ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/bootstrap-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/start-kube-aggregator-informers ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-registration-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-discovery-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]autoregister-completion ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-openapi-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 19 08:25:05 crc kubenswrapper[4780]: [-]shutdown failed: reason withheld 
Feb 19 08:25:05 crc kubenswrapper[4780]: readyz check failed Feb 19 08:25:05 crc kubenswrapper[4780]: I0219 08:25:05.291760 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:25:05 crc kubenswrapper[4780]: W0219 08:25:05.300486 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-38b1c5923c1fed0171fbbd004e10997246a4b317d633f51651a8dd5ec87d755b WatchSource:0}: Error finding container 38b1c5923c1fed0171fbbd004e10997246a4b317d633f51651a8dd5ec87d755b: Status 404 returned error can't find the container with id 38b1c5923c1fed0171fbbd004e10997246a4b317d633f51651a8dd5ec87d755b Feb 19 08:25:06 crc kubenswrapper[4780]: I0219 08:25:06.079815 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"38b1c5923c1fed0171fbbd004e10997246a4b317d633f51651a8dd5ec87d755b"} Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.091355 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.093940 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.095295 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf" exitCode=2 Feb 19 08:25:07 crc 
kubenswrapper[4780]: I0219 08:25:07.099798 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerID="221c04689c3c398131d201a1c167280e3fee570e69a0bace423bbd98dba66774" exitCode=0 Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.099877 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerDied","Data":"221c04689c3c398131d201a1c167280e3fee570e69a0bace423bbd98dba66774"} Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.428711 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rw27" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.430086 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.430899 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.431938 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.438639 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" 
Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.439823 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.441612 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.445118 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.946847 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.947716 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: I0219 08:25:07.948684 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:07 crc kubenswrapper[4780]: E0219 08:25:07.984967 4780 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" volumeName="registry-storage" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.112690 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.115369 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.116689 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be" exitCode=0 Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.116744 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e" exitCode=0 Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.116839 4780 scope.go:117] "RemoveContainer" containerID="97511373d5ecfadc014ffde1a0659e114d87f040410f52421e0c41878feda566" Feb 19 08:25:08 crc kubenswrapper[4780]: E0219 08:25:08.295665 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895985271423a1f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:25:08.295064095 +0000 UTC m=+251.038721554,LastTimestamp:2026-02-19 08:25:08.295064095 +0000 UTC m=+251.038721554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.757538 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.758394 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.759223 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.759780 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.760227 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.897831 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkcqz\" (UniqueName: 
\"kubernetes.io/projected/4c1b6e16-c5ef-4858-af98-cf370809d4c8-kube-api-access-fkcqz\") pod \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.897927 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-utilities\") pod \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.898114 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-catalog-content\") pod \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\" (UID: \"4c1b6e16-c5ef-4858-af98-cf370809d4c8\") " Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.899649 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-utilities" (OuterVolumeSpecName: "utilities") pod "4c1b6e16-c5ef-4858-af98-cf370809d4c8" (UID: "4c1b6e16-c5ef-4858-af98-cf370809d4c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.904717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1b6e16-c5ef-4858-af98-cf370809d4c8-kube-api-access-fkcqz" (OuterVolumeSpecName: "kube-api-access-fkcqz") pod "4c1b6e16-c5ef-4858-af98-cf370809d4c8" (UID: "4c1b6e16-c5ef-4858-af98-cf370809d4c8"). InnerVolumeSpecName "kube-api-access-fkcqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:08 crc kubenswrapper[4780]: I0219 08:25:08.928097 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c1b6e16-c5ef-4858-af98-cf370809d4c8" (UID: "4c1b6e16-c5ef-4858-af98-cf370809d4c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.000370 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.000423 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkcqz\" (UniqueName: \"kubernetes.io/projected/4c1b6e16-c5ef-4858-af98-cf370809d4c8-kube-api-access-fkcqz\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.000448 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1b6e16-c5ef-4858-af98-cf370809d4c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.066727 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.067693 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.068606 4780 controller.go:195] "Failed to 
update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.069272 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.070098 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.070202 4780 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.070589 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.135860 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr9mt" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.135916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr9mt" event={"ID":"4c1b6e16-c5ef-4858-af98-cf370809d4c8","Type":"ContainerDied","Data":"0bd76a525892fb9a739de7cf1ccf3118217c04ef6bf0f5e2bcfd4be03899c48f"} Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.136021 4780 scope.go:117] "RemoveContainer" containerID="221c04689c3c398131d201a1c167280e3fee570e69a0bace423bbd98dba66774" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.137105 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.137685 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.139461 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.140730 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" 
pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.141615 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26"} Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.144115 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.144894 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.145360 4780 generic.go:334] "Generic (PLEG): container finished" podID="9247aadc-86ba-41f6-a36e-d0243cd52728" containerID="88c2d5012b1e751f8775708f3c12dc6745d42c979e82da09b42ae27843bf12ad" exitCode=0 Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.145473 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"9247aadc-86ba-41f6-a36e-d0243cd52728","Type":"ContainerDied","Data":"88c2d5012b1e751f8775708f3c12dc6745d42c979e82da09b42ae27843bf12ad"} Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.145481 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.146245 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.146995 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.147596 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.148075 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" 
pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.148593 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.149169 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.151504 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.153540 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818" exitCode=0 Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.153572 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b" exitCode=0 Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.164873 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.165403 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.165885 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.166421 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.167463 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.170993 4780 scope.go:117] "RemoveContainer" 
containerID="14d02f5bbbb240ce7d6d1d07d58d91e0340c12b6a4717886631ba8fd74e3c340" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.231081 4780 scope.go:117] "RemoveContainer" containerID="6708b7ead2cd0fff7c562845e98bc7b54bb63839d5f1cfbcfde78b51799267df" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.271975 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Feb 19 08:25:09 crc kubenswrapper[4780]: E0219 08:25:09.672886 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.792743 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.793847 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.794358 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.794780 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.795096 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.795528 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.796244 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.796553 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.914431 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.914548 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.914558 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.914600 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.914623 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.914767 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.915064 4780 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.915088 4780 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:09 crc kubenswrapper[4780]: I0219 08:25:09.945693 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.016618 4780 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.174554 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.176693 4780 scope.go:117] "RemoveContainer" containerID="04fc583318472f9277abe4449a0921577cb369f952823e2d587f965c527a26be" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.176763 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.178077 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.178894 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.179570 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.180109 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.181735 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.182388 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.184629 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.185068 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.185736 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.186649 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.187209 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.187818 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.191577 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.192787 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.193408 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: 
connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.193856 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.194492 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.195199 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.195705 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.196281 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": 
dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.207108 4780 scope.go:117] "RemoveContainer" containerID="94f2da5f65034cb0921992c6e0a0e429e8dbb7ebbf2e7feaba157611c9dff818" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.236017 4780 scope.go:117] "RemoveContainer" containerID="a60a1b45246fc44099f1a836fe3388ccf6199bafff3cd2e41ab0f3bcc48b518e" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.262327 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kbfwg" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.263240 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.263645 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.264019 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.264037 4780 scope.go:117] "RemoveContainer" containerID="a804ae7134091eea990f3a54f93fee6fea318005ee9b1aba6972459d601431bf" Feb 19 08:25:10 crc 
kubenswrapper[4780]: I0219 08:25:10.264486 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.264770 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.265030 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.265495 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.282956 4780 scope.go:117] "RemoveContainer" containerID="566e3d1cd90e69acc8c95bb3ecfeb967b2bde3e0c8dbb08a62eea3dcbbcaed8b" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.306188 4780 scope.go:117] "RemoveContainer" containerID="f9a6ff7b0d23d8d7f27b102d60cb4769d1a0d39e9a7988598c0118fb003e9957" Feb 19 
08:25:10 crc kubenswrapper[4780]: E0219 08:25:10.473894 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.600933 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.602109 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.602764 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.603251 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.603617 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.604075 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.604495 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.604948 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.726940 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-kubelet-dir\") pod \"9247aadc-86ba-41f6-a36e-d0243cd52728\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.727100 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-var-lock\") pod \"9247aadc-86ba-41f6-a36e-d0243cd52728\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.727209 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9247aadc-86ba-41f6-a36e-d0243cd52728-kube-api-access\") pod \"9247aadc-86ba-41f6-a36e-d0243cd52728\" (UID: \"9247aadc-86ba-41f6-a36e-d0243cd52728\") " Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.727185 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9247aadc-86ba-41f6-a36e-d0243cd52728" (UID: "9247aadc-86ba-41f6-a36e-d0243cd52728"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.727254 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-var-lock" (OuterVolumeSpecName: "var-lock") pod "9247aadc-86ba-41f6-a36e-d0243cd52728" (UID: "9247aadc-86ba-41f6-a36e-d0243cd52728"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.727602 4780 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.727629 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9247aadc-86ba-41f6-a36e-d0243cd52728-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.734252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9247aadc-86ba-41f6-a36e-d0243cd52728-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9247aadc-86ba-41f6-a36e-d0243cd52728" (UID: "9247aadc-86ba-41f6-a36e-d0243cd52728"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:10 crc kubenswrapper[4780]: I0219 08:25:10.828720 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9247aadc-86ba-41f6-a36e-d0243cd52728-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.198194 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9247aadc-86ba-41f6-a36e-d0243cd52728","Type":"ContainerDied","Data":"519b4f5d37604309ff353e42d78c33c240bc32a08cc652fd2d3087ccd036d332"} Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.198265 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519b4f5d37604309ff353e42d78c33c240bc32a08cc652fd2d3087ccd036d332" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.198273 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.225263 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.225875 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.226427 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.226889 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.227405 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.227873 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[4780]: I0219 08:25:11.228362 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:12 crc kubenswrapper[4780]: E0219 08:25:12.075674 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="3.2s" Feb 19 08:25:15 crc kubenswrapper[4780]: E0219 08:25:15.277582 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="6.4s" Feb 19 08:25:17 crc kubenswrapper[4780]: E0219 08:25:17.699847 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895985271423a1f 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:25:08.295064095 +0000 UTC m=+251.038721554,LastTimestamp:2026-02-19 08:25:08.295064095 +0000 UTC m=+251.038721554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:25:17 crc kubenswrapper[4780]: I0219 08:25:17.944739 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:17 crc kubenswrapper[4780]: I0219 08:25:17.945059 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:17 crc kubenswrapper[4780]: I0219 08:25:17.945621 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:17 crc 
kubenswrapper[4780]: I0219 08:25:17.946385 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:17 crc kubenswrapper[4780]: I0219 08:25:17.946740 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:17 crc kubenswrapper[4780]: I0219 08:25:17.947279 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.937587 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.938579 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.939320 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.939937 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.940496 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.940891 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.941475 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.960975 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.961013 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:18 crc kubenswrapper[4780]: E0219 08:25:18.961582 4780 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:18 crc kubenswrapper[4780]: I0219 08:25:18.962214 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:19 crc kubenswrapper[4780]: W0219 08:25:19.008785 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-187c5590a82ca1b00c2c95bf76271f8946051505c8c1d62682c688d7fa99bd86 WatchSource:0}: Error finding container 187c5590a82ca1b00c2c95bf76271f8946051505c8c1d62682c688d7fa99bd86: Status 404 returned error can't find the container with id 187c5590a82ca1b00c2c95bf76271f8946051505c8c1d62682c688d7fa99bd86 Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.255271 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.255362 4780 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451" exitCode=1 Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.255481 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451"} Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.256403 4780 scope.go:117] "RemoveContainer" containerID="280be782339bdc8b3ff0d0bc05aee1d6b316ff17b29d79dee87bda135ebea451" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.256782 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.103:6443: connect: connection refused" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.256902 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"187c5590a82ca1b00c2c95bf76271f8946051505c8c1d62682c688d7fa99bd86"} Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.257623 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.258367 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.258951 4780 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.259714 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 
38.102.83.103:6443: connect: connection refused" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.260080 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:19 crc kubenswrapper[4780]: I0219 08:25:19.260575 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.266196 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.266357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d88f1c3d7991595ec3807abbc7882d35238e05df9543ea2aaef1134603fade4d"} Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.267862 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.268453 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" 
pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.269180 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.269592 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.269746 4780 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4e3188a3eee91047d67d948767cf9d794abc959721dfe9fe2a76b33a32c6e84d" exitCode=0 Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.269792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4e3188a3eee91047d67d948767cf9d794abc959721dfe9fe2a76b33a32c6e84d"} Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.269938 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 
08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.270283 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.270325 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.270430 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: E0219 08:25:20.270798 4780 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.270965 4780 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.271589 4780 status_manager.go:851] "Failed to get status for pod" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" pod="openshift-marketplace/redhat-operators-kbfwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kbfwg\": dial tcp 38.102.83.103:6443: 
connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.272089 4780 status_manager.go:851] "Failed to get status for pod" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6cbf4d4c75-784kz\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.272600 4780 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.272953 4780 status_manager.go:851] "Failed to get status for pod" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" pod="openshift-marketplace/certified-operators-6rw27" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6rw27\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.273456 4780 status_manager.go:851] "Failed to get status for pod" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" pod="openshift-marketplace/redhat-marketplace-dr9mt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dr9mt\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.274027 4780 status_manager.go:851] "Failed to get status for pod" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:20 crc kubenswrapper[4780]: I0219 08:25:20.274680 4780 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Feb 19 08:25:21 crc kubenswrapper[4780]: I0219 08:25:21.285537 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"50ca524995233dd1ffd639e93bf5779853493652d74697d2fdf021566b48afcd"} Feb 19 08:25:21 crc kubenswrapper[4780]: I0219 08:25:21.288203 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"72558212a27ef2a1688359d6ef2865c8b97ab9878cad3755943821a9cafd9292"} Feb 19 08:25:21 crc kubenswrapper[4780]: I0219 08:25:21.288722 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5ff69f2476090ff176e16f66dda4bd1c1e71b713530059da5afee4438fffe626"} Feb 19 08:25:22 crc kubenswrapper[4780]: I0219 08:25:22.305206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"721935b329cea2c3ef58bdbe5d5f8ce385051981fdbdd5002506f66c3d1fdc8a"} Feb 19 08:25:22 crc kubenswrapper[4780]: I0219 08:25:22.305465 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:22 crc kubenswrapper[4780]: I0219 08:25:22.306737 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:22 crc kubenswrapper[4780]: I0219 08:25:22.306708 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:22 crc kubenswrapper[4780]: I0219 08:25:22.307382 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"610f0e0d3243ca1c683e598a69ed8a586563fb76b9873619571034d3912f0431"} Feb 19 08:25:23 crc kubenswrapper[4780]: I0219 08:25:23.480272 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:25:23 crc kubenswrapper[4780]: I0219 08:25:23.485429 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:25:23 crc kubenswrapper[4780]: I0219 08:25:23.962987 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:23 crc kubenswrapper[4780]: I0219 08:25:23.963052 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:23 crc kubenswrapper[4780]: I0219 08:25:23.971266 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:24 crc kubenswrapper[4780]: I0219 08:25:24.304171 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" podUID="66917461-2afb-4a36-83fe-4ff8a0be77f8" 
containerName="oauth-openshift" containerID="cri-o://423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea" gracePeriod=15 Feb 19 08:25:24 crc kubenswrapper[4780]: I0219 08:25:24.321480 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:25:24 crc kubenswrapper[4780]: I0219 08:25:24.950911 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139571 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-idp-0-file-data\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139679 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-service-ca\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139720 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-trusted-ca-bundle\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139757 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-dir\") pod 
\"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139809 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-serving-cert\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139854 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-session\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.139976 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.140968 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141022 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141151 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-login\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141232 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-error\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141303 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-ocp-branding-template\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141352 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-router-certs\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141403 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-cliconfig\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141452 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-provider-selection\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141511 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-policies\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141583 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svq4c\" (UniqueName: \"kubernetes.io/projected/66917461-2afb-4a36-83fe-4ff8a0be77f8-kube-api-access-svq4c\") pod \"66917461-2afb-4a36-83fe-4ff8a0be77f8\" (UID: \"66917461-2afb-4a36-83fe-4ff8a0be77f8\") " Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.141956 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath 
\"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.142003 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.142029 4780 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.142543 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.144537 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.148562 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.151371 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.151435 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66917461-2afb-4a36-83fe-4ff8a0be77f8-kube-api-access-svq4c" (OuterVolumeSpecName: "kube-api-access-svq4c") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "kube-api-access-svq4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.151512 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.151797 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.152401 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.152516 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.153029 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.154745 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "66917461-2afb-4a36-83fe-4ff8a0be77f8" (UID: "66917461-2afb-4a36-83fe-4ff8a0be77f8"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243485 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243541 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243562 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243581 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243652 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243674 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243692 4780 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243712 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243734 4780 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66917461-2afb-4a36-83fe-4ff8a0be77f8-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243750 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svq4c\" (UniqueName: \"kubernetes.io/projected/66917461-2afb-4a36-83fe-4ff8a0be77f8-kube-api-access-svq4c\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.243768 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66917461-2afb-4a36-83fe-4ff8a0be77f8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.330919 4780 generic.go:334] "Generic (PLEG): container finished" podID="66917461-2afb-4a36-83fe-4ff8a0be77f8" containerID="423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea" exitCode=0 Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.331018 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.331064 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" event={"ID":"66917461-2afb-4a36-83fe-4ff8a0be77f8","Type":"ContainerDied","Data":"423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea"} Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.331192 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q9p69" event={"ID":"66917461-2afb-4a36-83fe-4ff8a0be77f8","Type":"ContainerDied","Data":"f21246dff2bc491e8802638462191f465e73ab19e9156a75758dabea1b6a39e9"} Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.331220 4780 scope.go:117] "RemoveContainer" containerID="423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.367617 4780 scope.go:117] "RemoveContainer" containerID="423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea" Feb 19 08:25:25 crc kubenswrapper[4780]: E0219 08:25:25.368766 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea\": container with ID starting with 423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea not found: ID does not exist" containerID="423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea" Feb 19 08:25:25 crc kubenswrapper[4780]: I0219 08:25:25.368824 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea"} err="failed to get container status \"423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea\": rpc error: code = NotFound desc = could not find container 
\"423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea\": container with ID starting with 423440a8c20bf9234d4ffdfc8999de1eb8fc70ca9d638b71a4043bb7860d2cea not found: ID does not exist" Feb 19 08:25:27 crc kubenswrapper[4780]: I0219 08:25:27.418838 4780 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:27 crc kubenswrapper[4780]: I0219 08:25:27.959705 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="43bc8b41-f0c5-41c8-b591-ce3101c104a9" Feb 19 08:25:28 crc kubenswrapper[4780]: E0219 08:25:28.182416 4780 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Feb 19 08:25:28 crc kubenswrapper[4780]: I0219 08:25:28.349804 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:28 crc kubenswrapper[4780]: I0219 08:25:28.349857 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="33abc105-8bb0-4564-a24f-210e18813bca" Feb 19 08:25:28 crc kubenswrapper[4780]: I0219 08:25:28.353300 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="43bc8b41-f0c5-41c8-b591-ce3101c104a9" Feb 19 08:25:35 crc kubenswrapper[4780]: I0219 08:25:35.412564 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:25:37 crc kubenswrapper[4780]: I0219 08:25:37.200404 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 08:25:37 crc kubenswrapper[4780]: I0219 08:25:37.645973 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 08:25:37 crc kubenswrapper[4780]: I0219 08:25:37.829660 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.073762 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.204770 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.251850 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.436024 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.450340 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.577648 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.904311 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 08:25:38 crc kubenswrapper[4780]: I0219 08:25:38.950451 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.032718 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.124108 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.373395 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.392692 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.441833 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.583439 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.603608 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.609041 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.633954 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.712184 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.725623 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.799349 4780 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.834841 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 08:25:39 crc kubenswrapper[4780]: I0219 08:25:39.909323 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.089191 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.173977 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.189279 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.228245 4780 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.232508 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.232484132 podStartE2EDuration="36.232484132s" podCreationTimestamp="2026-02-19 08:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:27.439431532 +0000 UTC m=+270.183089021" watchObservedRunningTime="2026-02-19 08:25:40.232484132 +0000 UTC m=+282.976141621" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.236663 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-dr9mt","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-q9p69"] Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.236755 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.242719 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.294511 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.294492944 podStartE2EDuration="13.294492944s" podCreationTimestamp="2026-02-19 08:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:40.260958213 +0000 UTC m=+283.004615692" watchObservedRunningTime="2026-02-19 08:25:40.294492944 +0000 UTC m=+283.038150393" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.306006 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.391516 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.452675 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.521671 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.619799 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.780110 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.789477 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.866715 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 08:25:40 crc kubenswrapper[4780]: I0219 08:25:40.920671 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.161559 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.334173 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.373234 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.374027 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.384977 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.507258 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: 
I0219 08:25:41.533048 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.564281 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.630068 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.723343 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.775387 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.870042 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.933191 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.954861 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" path="/var/lib/kubelet/pods/4c1b6e16-c5ef-4858-af98-cf370809d4c8/volumes" Feb 19 08:25:41 crc kubenswrapper[4780]: I0219 08:25:41.957417 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66917461-2afb-4a36-83fe-4ff8a0be77f8" path="/var/lib/kubelet/pods/66917461-2afb-4a36-83fe-4ff8a0be77f8/volumes" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.044999 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 
08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.228891 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.241593 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.354511 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.370751 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.399251 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.476590 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.546851 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.639613 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.734110 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.798924 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.835716 4780 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.935446 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 08:25:42 crc kubenswrapper[4780]: I0219 08:25:42.975358 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.006721 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.069645 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.104453 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.193059 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.238609 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.371081 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.527893 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.533546 4780 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 08:25:43 crc kubenswrapper[4780]: 
I0219 08:25:43.544515 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.660201 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.665371 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.760835 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.786566 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.810606 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.861743 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.890283 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.957156 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:25:43 crc kubenswrapper[4780]: I0219 08:25:43.976510 4780 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.155007 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.155141 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.191920 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.193491 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.211817 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.331669 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.384455 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.427894 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.440499 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.480754 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.528069 4780 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.712716 4780 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.758438 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:25:44 crc kubenswrapper[4780]: I0219 08:25:44.956662 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.032257 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.066818 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.077964 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.141692 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.172202 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.178759 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.274016 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.278517 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 
08:25:45.338960 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.383082 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.429246 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.447703 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.476244 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.565888 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.605960 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.607264 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.618221 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.636292 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.711944 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.739973 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.814589 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.898725 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 08:25:45 crc kubenswrapper[4780]: I0219 08:25:45.919378 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.073885 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.081658 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.128341 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.161873 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.199107 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.344793 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 
08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.373574 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.392536 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.430450 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.455906 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.583093 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.601647 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.647486 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.652805 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.760974 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.773072 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.793317 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.813005 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.838673 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.872942 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.967571 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 08:25:46 crc kubenswrapper[4780]: I0219 08:25:46.969349 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.020931 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.037090 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.101237 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.150869 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.251355 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.314550 4780 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.343664 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.375493 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.443315 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.636747 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.701534 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.763357 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.825530 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.827796 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 08:25:47 crc kubenswrapper[4780]: I0219 08:25:47.883429 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.063241 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.293519 4780 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.592011 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.622995 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.735495 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.737527 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.772060 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.776495 4780 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.834401 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.846689 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.887788 4780 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:25:48 crc kubenswrapper[4780]: I0219 08:25:48.888303 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26" gracePeriod=5 Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.000188 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.007372 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.018827 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.034766 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.107011 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.155715 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.170329 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.171214 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.357353 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.387716 4780 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.452603 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.507451 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.639506 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.640576 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.824639 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.875467 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:25:49 crc kubenswrapper[4780]: I0219 08:25:49.924176 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.083631 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.210685 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.332095 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 08:25:50 crc 
kubenswrapper[4780]: I0219 08:25:50.341999 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.342705 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.383979 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.522819 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.523297 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.565183 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.589303 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.656599 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.703458 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.797991 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.851369 4780 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.860369 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.911319 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.964033 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 08:25:50 crc kubenswrapper[4780]: I0219 08:25:50.986622 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.214235 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.248379 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.461636 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.604662 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.622375 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.825036 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 08:25:51 crc 
kubenswrapper[4780]: I0219 08:25:51.833606 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.899154 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 08:25:51 crc kubenswrapper[4780]: I0219 08:25:51.917557 4780 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.070011 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.207407 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.353240 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.434375 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.581063 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.582166 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.626933 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 08:25:52 crc kubenswrapper[4780]: I0219 08:25:52.807635 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.003954 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.119642 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.161081 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.272846 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.418783 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.495985 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.547605 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.614302 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.635936 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.650364 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 08:25:53 crc kubenswrapper[4780]: I0219 08:25:53.933253 4780 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.032441 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.108436 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.383000 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.478291 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.506011 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.509248 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.509378 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.512407 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.573106 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.573241 4780 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26" exitCode=137 Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.573320 4780 scope.go:117] "RemoveContainer" containerID="cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.573347 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.609772 4780 scope.go:117] "RemoveContainer" containerID="cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26" Feb 19 08:25:54 crc kubenswrapper[4780]: E0219 08:25:54.610213 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26\": container with ID starting with cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26 not found: ID does not exist" containerID="cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.610265 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26"} err="failed to get container status \"cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26\": rpc error: code = NotFound desc = could not find container \"cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26\": container with ID starting with cceb1eb7d3ea4bf13cc55ba69a1a5898a96f6ccbd392f83352ffe7b943fb7c26 not found: ID does not exist" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661506 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661618 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661637 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661750 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661801 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661910 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.661992 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.662060 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.662223 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.662452 4780 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.662485 4780 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.662504 4780 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.662520 4780 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.673252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod 
"f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.886190 4780 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:54 crc kubenswrapper[4780]: I0219 08:25:54.888836 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.527345 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.845312 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cdbdbf77c-2blgp"] Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.845629 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" podUID="e066fa01-d14b-4a6c-948b-522941afea4a" containerName="controller-manager" containerID="cri-o://9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257" gracePeriod=30 Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.950494 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.951198 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.966273 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz"] Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.966322 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.966335 4780 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d1542164-0395-4010-80de-374cc744c3bd" Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.966516 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" containerName="route-controller-manager" containerID="cri-o://69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a" gracePeriod=30 Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.972332 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:25:55 crc kubenswrapper[4780]: I0219 08:25:55.972372 4780 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d1542164-0395-4010-80de-374cc744c3bd" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.374634 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.378079 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.409533 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-client-ca\") pod \"964f7b64-5b8b-412d-80f3-cf95e2430113\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.410805 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e066fa01-d14b-4a6c-948b-522941afea4a-serving-cert\") pod \"e066fa01-d14b-4a6c-948b-522941afea4a\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.410859 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8shls\" (UniqueName: \"kubernetes.io/projected/e066fa01-d14b-4a6c-948b-522941afea4a-kube-api-access-8shls\") pod \"e066fa01-d14b-4a6c-948b-522941afea4a\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.410902 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-proxy-ca-bundles\") pod \"e066fa01-d14b-4a6c-948b-522941afea4a\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.410948 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vgh5\" (UniqueName: \"kubernetes.io/projected/964f7b64-5b8b-412d-80f3-cf95e2430113-kube-api-access-7vgh5\") pod \"964f7b64-5b8b-412d-80f3-cf95e2430113\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.410999 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/964f7b64-5b8b-412d-80f3-cf95e2430113-serving-cert\") pod \"964f7b64-5b8b-412d-80f3-cf95e2430113\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.411031 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-config\") pod \"964f7b64-5b8b-412d-80f3-cf95e2430113\" (UID: \"964f7b64-5b8b-412d-80f3-cf95e2430113\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.411073 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-client-ca\") pod \"e066fa01-d14b-4a6c-948b-522941afea4a\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.411153 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-config\") pod \"e066fa01-d14b-4a6c-948b-522941afea4a\" (UID: \"e066fa01-d14b-4a6c-948b-522941afea4a\") " Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.410670 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-client-ca" (OuterVolumeSpecName: "client-ca") pod "964f7b64-5b8b-412d-80f3-cf95e2430113" (UID: "964f7b64-5b8b-412d-80f3-cf95e2430113"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.412831 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e066fa01-d14b-4a6c-948b-522941afea4a" (UID: "e066fa01-d14b-4a6c-948b-522941afea4a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.415546 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "e066fa01-d14b-4a6c-948b-522941afea4a" (UID: "e066fa01-d14b-4a6c-948b-522941afea4a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.416112 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-config" (OuterVolumeSpecName: "config") pod "964f7b64-5b8b-412d-80f3-cf95e2430113" (UID: "964f7b64-5b8b-412d-80f3-cf95e2430113"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.414237 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-config" (OuterVolumeSpecName: "config") pod "e066fa01-d14b-4a6c-948b-522941afea4a" (UID: "e066fa01-d14b-4a6c-948b-522941afea4a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.419416 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e066fa01-d14b-4a6c-948b-522941afea4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e066fa01-d14b-4a6c-948b-522941afea4a" (UID: "e066fa01-d14b-4a6c-948b-522941afea4a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.420081 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964f7b64-5b8b-412d-80f3-cf95e2430113-kube-api-access-7vgh5" (OuterVolumeSpecName: "kube-api-access-7vgh5") pod "964f7b64-5b8b-412d-80f3-cf95e2430113" (UID: "964f7b64-5b8b-412d-80f3-cf95e2430113"). InnerVolumeSpecName "kube-api-access-7vgh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.421079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e066fa01-d14b-4a6c-948b-522941afea4a-kube-api-access-8shls" (OuterVolumeSpecName: "kube-api-access-8shls") pod "e066fa01-d14b-4a6c-948b-522941afea4a" (UID: "e066fa01-d14b-4a6c-948b-522941afea4a"). InnerVolumeSpecName "kube-api-access-8shls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.423405 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964f7b64-5b8b-412d-80f3-cf95e2430113-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "964f7b64-5b8b-412d-80f3-cf95e2430113" (UID: "964f7b64-5b8b-412d-80f3-cf95e2430113"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513190 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513235 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513248 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513260 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/964f7b64-5b8b-412d-80f3-cf95e2430113-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513272 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e066fa01-d14b-4a6c-948b-522941afea4a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513285 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8shls\" (UniqueName: \"kubernetes.io/projected/e066fa01-d14b-4a6c-948b-522941afea4a-kube-api-access-8shls\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513299 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e066fa01-d14b-4a6c-948b-522941afea4a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513313 4780 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7vgh5\" (UniqueName: \"kubernetes.io/projected/964f7b64-5b8b-412d-80f3-cf95e2430113-kube-api-access-7vgh5\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.513324 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/964f7b64-5b8b-412d-80f3-cf95e2430113-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.590370 4780 generic.go:334] "Generic (PLEG): container finished" podID="e066fa01-d14b-4a6c-948b-522941afea4a" containerID="9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257" exitCode=0 Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.590499 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.591311 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" event={"ID":"e066fa01-d14b-4a6c-948b-522941afea4a","Type":"ContainerDied","Data":"9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257"} Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.592032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cdbdbf77c-2blgp" event={"ID":"e066fa01-d14b-4a6c-948b-522941afea4a","Type":"ContainerDied","Data":"e80f31dc34f3ee0bce9a39c7397768d355976e197e093bb0cb20033f7709bc6a"} Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.592187 4780 scope.go:117] "RemoveContainer" containerID="9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.595050 4780 generic.go:334] "Generic (PLEG): container finished" podID="964f7b64-5b8b-412d-80f3-cf95e2430113" 
containerID="69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a" exitCode=0 Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.595096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" event={"ID":"964f7b64-5b8b-412d-80f3-cf95e2430113","Type":"ContainerDied","Data":"69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a"} Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.595165 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" event={"ID":"964f7b64-5b8b-412d-80f3-cf95e2430113","Type":"ContainerDied","Data":"a0546927775386d207fb42c99c0d6fb8e7acb2f3364da37122052a70ca62ddd6"} Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.595245 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.624919 4780 scope.go:117] "RemoveContainer" containerID="9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.625651 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257\": container with ID starting with 9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257 not found: ID does not exist" containerID="9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.625711 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257"} err="failed to get container status \"9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257\": 
rpc error: code = NotFound desc = could not find container \"9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257\": container with ID starting with 9fbb7cdd4bc8b6106e8f34e5a1a52c3f6c469066bcfacf263b11cafed40f4257 not found: ID does not exist" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.625755 4780 scope.go:117] "RemoveContainer" containerID="69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.638445 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cdbdbf77c-2blgp"] Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.650011 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cdbdbf77c-2blgp"] Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.659352 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz"] Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.665861 4780 scope.go:117] "RemoveContainer" containerID="69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.666559 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a\": container with ID starting with 69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a not found: ID does not exist" containerID="69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.666625 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a"} err="failed to get container status \"69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a\": rpc 
error: code = NotFound desc = could not find container \"69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a\": container with ID starting with 69856842ef42b1352a671cde2028ed9a9cfb410f6f52e89e8dccda623526a84a not found: ID does not exist" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.667263 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cbf4d4c75-784kz"] Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.686814 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-klfvp"] Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.687312 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="registry-server" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.687415 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="registry-server" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.687498 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e066fa01-d14b-4a6c-948b-522941afea4a" containerName="controller-manager" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.687569 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e066fa01-d14b-4a6c-948b-522941afea4a" containerName="controller-manager" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.687655 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917461-2afb-4a36-83fe-4ff8a0be77f8" containerName="oauth-openshift" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.687726 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917461-2afb-4a36-83fe-4ff8a0be77f8" containerName="oauth-openshift" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.687807 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" containerName="installer" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.687879 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" containerName="installer" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.687954 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="extract-utilities" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688023 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="extract-utilities" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.688108 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="extract-content" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688204 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="extract-content" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.688294 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" containerName="route-controller-manager" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688374 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" containerName="route-controller-manager" Feb 19 08:25:56 crc kubenswrapper[4780]: E0219 08:25:56.688451 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688545 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688722 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688810 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" containerName="route-controller-manager" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688900 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917461-2afb-4a36-83fe-4ff8a0be77f8" containerName="oauth-openshift" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.688982 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9247aadc-86ba-41f6-a36e-d0243cd52728" containerName="installer" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.689070 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1b6e16-c5ef-4858-af98-cf370809d4c8" containerName="registry-server" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.689169 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e066fa01-d14b-4a6c-948b-522941afea4a" containerName="controller-manager" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.689713 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.695387 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.695741 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.696000 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.696254 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.696446 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.696719 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.697060 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.697715 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.697757 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.697935 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 08:25:56 crc 
kubenswrapper[4780]: I0219 08:25:56.698679 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.701208 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.714911 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.714987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715075 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpwz\" (UniqueName: \"kubernetes.io/projected/b7123d8d-20ca-497b-bba9-f66046e0faa7-kube-api-access-kgpwz\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715204 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715299 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-audit-policies\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715378 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7123d8d-20ca-497b-bba9-f66046e0faa7-audit-dir\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715648 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.715721 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.720597 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-klfvp"] Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.721527 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.721746 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.731904 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816534 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816636 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816689 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-audit-policies\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816726 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816761 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7123d8d-20ca-497b-bba9-f66046e0faa7-audit-dir\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816850 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816900 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.816955 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.817012 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.817046 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.817082 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.817120 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.817189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpwz\" (UniqueName: \"kubernetes.io/projected/b7123d8d-20ca-497b-bba9-f66046e0faa7-kube-api-access-kgpwz\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.817476 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7123d8d-20ca-497b-bba9-f66046e0faa7-audit-dir\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.818325 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-audit-policies\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.819568 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.820869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.822174 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc 
kubenswrapper[4780]: I0219 08:25:56.823644 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.823646 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.823797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.824854 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.825294 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.826654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.827252 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.827480 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7123d8d-20ca-497b-bba9-f66046e0faa7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:56 crc kubenswrapper[4780]: I0219 08:25:56.846762 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpwz\" (UniqueName: \"kubernetes.io/projected/b7123d8d-20ca-497b-bba9-f66046e0faa7-kube-api-access-kgpwz\") pod \"oauth-openshift-59cd769dfc-klfvp\" (UID: \"b7123d8d-20ca-497b-bba9-f66046e0faa7\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" 
Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.026880 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.328490 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-klfvp"] Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.608060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" event={"ID":"b7123d8d-20ca-497b-bba9-f66046e0faa7","Type":"ContainerStarted","Data":"8cc69c491e6bd66b38aa78049285292e11b014e0b9418c2905591c33e4bc979d"} Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.608111 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" event={"ID":"b7123d8d-20ca-497b-bba9-f66046e0faa7","Type":"ContainerStarted","Data":"8f17244a84db9d0ff862ee3947aa4fbfbbe61646d3312d4cf4e748520d0c7da3"} Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.608575 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.612708 4780 patch_prober.go:28] interesting pod/oauth-openshift-59cd769dfc-klfvp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.612772 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" podUID="b7123d8d-20ca-497b-bba9-f66046e0faa7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Feb 19 
08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.631871 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" podStartSLOduration=58.631847749 podStartE2EDuration="58.631847749s" podCreationTimestamp="2026-02-19 08:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:57.629099704 +0000 UTC m=+300.372757183" watchObservedRunningTime="2026-02-19 08:25:57.631847749 +0000 UTC m=+300.375505238" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.683455 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784"] Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.684582 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.687913 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.688349 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.689395 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.689793 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.690765 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 
08:25:57.692276 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-ngcvj"] Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.693502 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.693964 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.700595 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.700602 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.700813 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.700923 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.701073 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.701187 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.704913 4780 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.705771 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784"] Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.709317 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.712997 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-ngcvj"] Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-config\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737484 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-config\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-client-ca\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737603 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-serving-cert\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737702 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4xm\" (UniqueName: \"kubernetes.io/projected/a56ff006-b612-4081-b80b-aa31ad27d2c4-kube-api-access-hn4xm\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnf5n\" (UniqueName: \"kubernetes.io/projected/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-kube-api-access-dnf5n\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.737878 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-proxy-ca-bundles\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.738246 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a56ff006-b612-4081-b80b-aa31ad27d2c4-serving-cert\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " 
pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.738282 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-client-ca\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.840578 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a56ff006-b612-4081-b80b-aa31ad27d2c4-serving-cert\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.840642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-client-ca\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.840700 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-config\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.840749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-config\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.840782 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-client-ca\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.840988 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-serving-cert\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.842148 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-config\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.842465 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-client-ca\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.842782 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn4xm\" (UniqueName: \"kubernetes.io/projected/a56ff006-b612-4081-b80b-aa31ad27d2c4-kube-api-access-hn4xm\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.842828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnf5n\" (UniqueName: \"kubernetes.io/projected/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-kube-api-access-dnf5n\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.842845 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-client-ca\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.842864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-proxy-ca-bundles\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.843743 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-proxy-ca-bundles\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " 
pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.845690 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-config\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.849762 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a56ff006-b612-4081-b80b-aa31ad27d2c4-serving-cert\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.850626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-serving-cert\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.875046 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.881570 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.884524 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.890958 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.899173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnf5n\" (UniqueName: \"kubernetes.io/projected/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-kube-api-access-dnf5n\") pod \"controller-manager-5594b7978f-ngcvj\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.909283 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn4xm\" (UniqueName: \"kubernetes.io/projected/a56ff006-b612-4081-b80b-aa31ad27d2c4-kube-api-access-hn4xm\") pod \"route-controller-manager-7ccc5c98b4-tl784\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.954075 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964f7b64-5b8b-412d-80f3-cf95e2430113" path="/var/lib/kubelet/pods/964f7b64-5b8b-412d-80f3-cf95e2430113/volumes" Feb 19 08:25:57 crc kubenswrapper[4780]: I0219 08:25:57.956807 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e066fa01-d14b-4a6c-948b-522941afea4a" path="/var/lib/kubelet/pods/e066fa01-d14b-4a6c-948b-522941afea4a/volumes" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.013530 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.021726 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.028525 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.036239 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.300466 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-ngcvj"] Feb 19 08:25:58 crc kubenswrapper[4780]: W0219 08:25:58.307621 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b1e00d7_d6e0_43af_9605_ebb7fcb52548.slice/crio-81e5fafaa469121327d9d688a97c997d06f30bad3d47dc0fcfab6911f6515d3f WatchSource:0}: Error finding container 81e5fafaa469121327d9d688a97c997d06f30bad3d47dc0fcfab6911f6515d3f: Status 404 returned error can't find the container with id 81e5fafaa469121327d9d688a97c997d06f30bad3d47dc0fcfab6911f6515d3f Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.565370 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784"] Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.618404 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" event={"ID":"a56ff006-b612-4081-b80b-aa31ad27d2c4","Type":"ContainerStarted","Data":"3c49d19134a714f0283e5610374f7c5365bf3d8b8aea3a917870f7a2ca63ce05"} Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.622467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" event={"ID":"0b1e00d7-d6e0-43af-9605-ebb7fcb52548","Type":"ContainerStarted","Data":"6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969"} Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.622527 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" event={"ID":"0b1e00d7-d6e0-43af-9605-ebb7fcb52548","Type":"ContainerStarted","Data":"81e5fafaa469121327d9d688a97c997d06f30bad3d47dc0fcfab6911f6515d3f"} Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.622561 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.627673 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-59cd769dfc-klfvp" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.632218 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:25:58 crc kubenswrapper[4780]: I0219 08:25:58.650840 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" podStartSLOduration=3.650820427 podStartE2EDuration="3.650820427s" podCreationTimestamp="2026-02-19 08:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:58.645897362 +0000 UTC m=+301.389554851" watchObservedRunningTime="2026-02-19 08:25:58.650820427 +0000 UTC m=+301.394477886" Feb 19 08:25:59 crc kubenswrapper[4780]: I0219 08:25:59.629233 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" 
event={"ID":"a56ff006-b612-4081-b80b-aa31ad27d2c4","Type":"ContainerStarted","Data":"3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e"} Feb 19 08:25:59 crc kubenswrapper[4780]: I0219 08:25:59.631339 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:59 crc kubenswrapper[4780]: I0219 08:25:59.645239 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:25:59 crc kubenswrapper[4780]: I0219 08:25:59.736481 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" podStartSLOduration=4.736464525 podStartE2EDuration="4.736464525s" podCreationTimestamp="2026-02-19 08:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:59.704771535 +0000 UTC m=+302.448428984" watchObservedRunningTime="2026-02-19 08:25:59.736464525 +0000 UTC m=+302.480121974" Feb 19 08:26:08 crc kubenswrapper[4780]: I0219 08:26:08.690269 4780 generic.go:334] "Generic (PLEG): container finished" podID="f841ab7c-b591-480d-8c4a-70003c08e679" containerID="2d4a12b15001128752f6ecdaf4ade6b48319237518cb4ed1e2ff07d817fcec2d" exitCode=0 Feb 19 08:26:08 crc kubenswrapper[4780]: I0219 08:26:08.690408 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" event={"ID":"f841ab7c-b591-480d-8c4a-70003c08e679","Type":"ContainerDied","Data":"2d4a12b15001128752f6ecdaf4ade6b48319237518cb4ed1e2ff07d817fcec2d"} Feb 19 08:26:08 crc kubenswrapper[4780]: I0219 08:26:08.691352 4780 scope.go:117] "RemoveContainer" containerID="2d4a12b15001128752f6ecdaf4ade6b48319237518cb4ed1e2ff07d817fcec2d" Feb 19 08:26:09 crc 
kubenswrapper[4780]: I0219 08:26:09.703356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" event={"ID":"f841ab7c-b591-480d-8c4a-70003c08e679","Type":"ContainerStarted","Data":"bbb24494e1d07bba67f3ee8c6f2d52ef2e1c38d80a4b734c5d555611d8394260"} Feb 19 08:26:09 crc kubenswrapper[4780]: I0219 08:26:09.705473 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:26:09 crc kubenswrapper[4780]: I0219 08:26:09.707695 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" Feb 19 08:26:15 crc kubenswrapper[4780]: I0219 08:26:15.868234 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-ngcvj"] Feb 19 08:26:15 crc kubenswrapper[4780]: I0219 08:26:15.868985 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" podUID="0b1e00d7-d6e0-43af-9605-ebb7fcb52548" containerName="controller-manager" containerID="cri-o://6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969" gracePeriod=30 Feb 19 08:26:15 crc kubenswrapper[4780]: I0219 08:26:15.884272 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784"] Feb 19 08:26:15 crc kubenswrapper[4780]: I0219 08:26:15.884762 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" podUID="a56ff006-b612-4081-b80b-aa31ad27d2c4" containerName="route-controller-manager" containerID="cri-o://3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e" gracePeriod=30 Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.473907 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.479743 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.563327 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-config\") pod \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.563457 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-proxy-ca-bundles\") pod \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.564339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b1e00d7-d6e0-43af-9605-ebb7fcb52548" (UID: "0b1e00d7-d6e0-43af-9605-ebb7fcb52548"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.564476 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-config" (OuterVolumeSpecName: "config") pod "0b1e00d7-d6e0-43af-9605-ebb7fcb52548" (UID: "0b1e00d7-d6e0-43af-9605-ebb7fcb52548"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565397 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn4xm\" (UniqueName: \"kubernetes.io/projected/a56ff006-b612-4081-b80b-aa31ad27d2c4-kube-api-access-hn4xm\") pod \"a56ff006-b612-4081-b80b-aa31ad27d2c4\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565467 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnf5n\" (UniqueName: \"kubernetes.io/projected/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-kube-api-access-dnf5n\") pod \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565507 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-serving-cert\") pod \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565545 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a56ff006-b612-4081-b80b-aa31ad27d2c4-serving-cert\") pod \"a56ff006-b612-4081-b80b-aa31ad27d2c4\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565592 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-client-ca\") pod \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\" (UID: \"0b1e00d7-d6e0-43af-9605-ebb7fcb52548\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565659 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-client-ca\") pod \"a56ff006-b612-4081-b80b-aa31ad27d2c4\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.565716 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-config\") pod \"a56ff006-b612-4081-b80b-aa31ad27d2c4\" (UID: \"a56ff006-b612-4081-b80b-aa31ad27d2c4\") " Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.566051 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.566077 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.566252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b1e00d7-d6e0-43af-9605-ebb7fcb52548" (UID: "0b1e00d7-d6e0-43af-9605-ebb7fcb52548"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.566571 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "a56ff006-b612-4081-b80b-aa31ad27d2c4" (UID: "a56ff006-b612-4081-b80b-aa31ad27d2c4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.566640 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-config" (OuterVolumeSpecName: "config") pod "a56ff006-b612-4081-b80b-aa31ad27d2c4" (UID: "a56ff006-b612-4081-b80b-aa31ad27d2c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.572365 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b1e00d7-d6e0-43af-9605-ebb7fcb52548" (UID: "0b1e00d7-d6e0-43af-9605-ebb7fcb52548"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.572374 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56ff006-b612-4081-b80b-aa31ad27d2c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a56ff006-b612-4081-b80b-aa31ad27d2c4" (UID: "a56ff006-b612-4081-b80b-aa31ad27d2c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.572451 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56ff006-b612-4081-b80b-aa31ad27d2c4-kube-api-access-hn4xm" (OuterVolumeSpecName: "kube-api-access-hn4xm") pod "a56ff006-b612-4081-b80b-aa31ad27d2c4" (UID: "a56ff006-b612-4081-b80b-aa31ad27d2c4"). InnerVolumeSpecName "kube-api-access-hn4xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.572454 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-kube-api-access-dnf5n" (OuterVolumeSpecName: "kube-api-access-dnf5n") pod "0b1e00d7-d6e0-43af-9605-ebb7fcb52548" (UID: "0b1e00d7-d6e0-43af-9605-ebb7fcb52548"). InnerVolumeSpecName "kube-api-access-dnf5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667107 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn4xm\" (UniqueName: \"kubernetes.io/projected/a56ff006-b612-4081-b80b-aa31ad27d2c4-kube-api-access-hn4xm\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667176 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnf5n\" (UniqueName: \"kubernetes.io/projected/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-kube-api-access-dnf5n\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667187 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667198 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a56ff006-b612-4081-b80b-aa31ad27d2c4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667210 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1e00d7-d6e0-43af-9605-ebb7fcb52548-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667219 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.667229 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56ff006-b612-4081-b80b-aa31ad27d2c4-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.784678 4780 generic.go:334] "Generic (PLEG): container finished" podID="a56ff006-b612-4081-b80b-aa31ad27d2c4" containerID="3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e" exitCode=0 Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.784766 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" event={"ID":"a56ff006-b612-4081-b80b-aa31ad27d2c4","Type":"ContainerDied","Data":"3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e"} Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.784792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" event={"ID":"a56ff006-b612-4081-b80b-aa31ad27d2c4","Type":"ContainerDied","Data":"3c49d19134a714f0283e5610374f7c5365bf3d8b8aea3a917870f7a2ca63ce05"} Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.784825 4780 scope.go:117] "RemoveContainer" containerID="3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.785141 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.788250 4780 generic.go:334] "Generic (PLEG): container finished" podID="0b1e00d7-d6e0-43af-9605-ebb7fcb52548" containerID="6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969" exitCode=0 Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.788340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" event={"ID":"0b1e00d7-d6e0-43af-9605-ebb7fcb52548","Type":"ContainerDied","Data":"6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969"} Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.788371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" event={"ID":"0b1e00d7-d6e0-43af-9605-ebb7fcb52548","Type":"ContainerDied","Data":"81e5fafaa469121327d9d688a97c997d06f30bad3d47dc0fcfab6911f6515d3f"} Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.788387 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5594b7978f-ngcvj" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.808605 4780 scope.go:117] "RemoveContainer" containerID="3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e" Feb 19 08:26:16 crc kubenswrapper[4780]: E0219 08:26:16.809244 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e\": container with ID starting with 3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e not found: ID does not exist" containerID="3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.809275 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e"} err="failed to get container status \"3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e\": rpc error: code = NotFound desc = could not find container \"3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e\": container with ID starting with 3a2ee46879ebd35c9cc55e206e3ca06a2b9e56c2a34690f161d5b974e3e3eb1e not found: ID does not exist" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.809296 4780 scope.go:117] "RemoveContainer" containerID="6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.823058 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784"] Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.831548 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-tl784"] Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.831972 4780 scope.go:117] 
"RemoveContainer" containerID="6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969" Feb 19 08:26:16 crc kubenswrapper[4780]: E0219 08:26:16.832716 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969\": container with ID starting with 6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969 not found: ID does not exist" containerID="6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.832851 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969"} err="failed to get container status \"6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969\": rpc error: code = NotFound desc = could not find container \"6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969\": container with ID starting with 6f872cecb42525e91cf073649914c71d26f93054e930fc1e2dc04e845d6da969 not found: ID does not exist" Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.833944 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-ngcvj"] Feb 19 08:26:16 crc kubenswrapper[4780]: I0219 08:26:16.838668 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-ngcvj"] Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.794383 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"] Feb 19 08:26:17 crc kubenswrapper[4780]: E0219 08:26:17.796344 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56ff006-b612-4081-b80b-aa31ad27d2c4" containerName="route-controller-manager" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 
08:26:17.796785 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56ff006-b612-4081-b80b-aa31ad27d2c4" containerName="route-controller-manager" Feb 19 08:26:17 crc kubenswrapper[4780]: E0219 08:26:17.796941 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1e00d7-d6e0-43af-9605-ebb7fcb52548" containerName="controller-manager" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.797010 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1e00d7-d6e0-43af-9605-ebb7fcb52548" containerName="controller-manager" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.797188 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1e00d7-d6e0-43af-9605-ebb7fcb52548" containerName="controller-manager" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.797266 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56ff006-b612-4081-b80b-aa31ad27d2c4" containerName="route-controller-manager" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.797715 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.799923 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.802495 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.802660 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.802823 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.802840 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.803373 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.807143 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd"] Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.808021 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.811981 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.813852 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.814222 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.814431 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.814846 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd"] Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.814222 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.815165 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.823096 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"] Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.825623 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.883947 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a796501f-3442-4790-8740-6d6d68514d64-serving-cert\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.884005 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chrd\" (UniqueName: \"kubernetes.io/projected/a796501f-3442-4790-8740-6d6d68514d64-kube-api-access-8chrd\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.884034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-client-ca\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.884393 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-config\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.953181 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1e00d7-d6e0-43af-9605-ebb7fcb52548" path="/var/lib/kubelet/pods/0b1e00d7-d6e0-43af-9605-ebb7fcb52548/volumes" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.953863 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a56ff006-b612-4081-b80b-aa31ad27d2c4" path="/var/lib/kubelet/pods/a56ff006-b612-4081-b80b-aa31ad27d2c4/volumes" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985467 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a0e890-a595-46f8-9822-80e3270b6f0f-serving-cert\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r92gj\" (UniqueName: \"kubernetes.io/projected/e9a0e890-a595-46f8-9822-80e3270b6f0f-kube-api-access-r92gj\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985548 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-proxy-ca-bundles\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985689 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-config\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985867 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-config\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985919 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a796501f-3442-4790-8740-6d6d68514d64-serving-cert\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chrd\" (UniqueName: \"kubernetes.io/projected/a796501f-3442-4790-8740-6d6d68514d64-kube-api-access-8chrd\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.985991 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-client-ca\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.986031 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-client-ca\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: 
\"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.987062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-client-ca\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.988709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-config\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:17 crc kubenswrapper[4780]: I0219 08:26:17.994164 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a796501f-3442-4790-8740-6d6d68514d64-serving-cert\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.006960 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chrd\" (UniqueName: \"kubernetes.io/projected/a796501f-3442-4790-8740-6d6d68514d64-kube-api-access-8chrd\") pod \"route-controller-manager-59b8b66648-mcvdr\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") " pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.087239 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e9a0e890-a595-46f8-9822-80e3270b6f0f-serving-cert\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.087307 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r92gj\" (UniqueName: \"kubernetes.io/projected/e9a0e890-a595-46f8-9822-80e3270b6f0f-kube-api-access-r92gj\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.087362 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-proxy-ca-bundles\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.087422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-config\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.087541 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-client-ca\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.088986 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-client-ca\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.089455 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-proxy-ca-bundles\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.090943 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-config\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.091804 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a0e890-a595-46f8-9822-80e3270b6f0f-serving-cert\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.106053 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r92gj\" (UniqueName: \"kubernetes.io/projected/e9a0e890-a595-46f8-9822-80e3270b6f0f-kube-api-access-r92gj\") pod \"controller-manager-6b565b4bcd-6f7kd\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 
08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.136353 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.145440 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.396877 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"] Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.550299 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd"] Feb 19 08:26:18 crc kubenswrapper[4780]: W0219 08:26:18.558531 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a0e890_a595_46f8_9822_80e3270b6f0f.slice/crio-7db7115afae52780adc4444860d5a85058a90b55382a94f277e2bc9aa5fd8c44 WatchSource:0}: Error finding container 7db7115afae52780adc4444860d5a85058a90b55382a94f277e2bc9aa5fd8c44: Status 404 returned error can't find the container with id 7db7115afae52780adc4444860d5a85058a90b55382a94f277e2bc9aa5fd8c44 Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.826735 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" event={"ID":"a796501f-3442-4790-8740-6d6d68514d64","Type":"ContainerStarted","Data":"13bac1de014514ffc237cfed0ee9c382766a356d86ae2a12fa89885d9fbfefdf"} Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.827014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" 
event={"ID":"a796501f-3442-4790-8740-6d6d68514d64","Type":"ContainerStarted","Data":"c3c2a57d55b0f52a07e08101f050242e9767eeffa9558711e7db59258b4d82aa"} Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.827433 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.828759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" event={"ID":"e9a0e890-a595-46f8-9822-80e3270b6f0f","Type":"ContainerStarted","Data":"80e18c079c89d17d0299f40d641d7c94f997a810747f4a830bb0c5f5c06c32ca"} Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.828792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" event={"ID":"e9a0e890-a595-46f8-9822-80e3270b6f0f","Type":"ContainerStarted","Data":"7db7115afae52780adc4444860d5a85058a90b55382a94f277e2bc9aa5fd8c44"} Feb 19 08:26:18 crc kubenswrapper[4780]: I0219 08:26:18.846977 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" podStartSLOduration=3.846962677 podStartE2EDuration="3.846962677s" podCreationTimestamp="2026-02-19 08:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:26:18.845405624 +0000 UTC m=+321.589063073" watchObservedRunningTime="2026-02-19 08:26:18.846962677 +0000 UTC m=+321.590620126" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.286361 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.526755 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-6rw27"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.527308 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rw27" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="registry-server" containerID="cri-o://62a918f3b7e21f75f29902dfa8034e8fcef812ef115a999062ce1c0ece235506" gracePeriod=30 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.533422 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhjsh"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.533642 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qhjsh" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="registry-server" containerID="cri-o://cdec0230b52d175e62a13b0106364993d22c29422278c0f2b7d89eccd63dbf55" gracePeriod=30 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.549041 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pknb2"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.549383 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pknb2" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="registry-server" containerID="cri-o://fe0a0bc4079db73285cf38ed5a12da0079d29764eeff963f6a75630bcf52bd8e" gracePeriod=30 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.563173 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwdw8"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.563432 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" 
containerID="cri-o://bbb24494e1d07bba67f3ee8c6f2d52ef2e1c38d80a4b734c5d555611d8394260" gracePeriod=30 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.566907 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5jxl"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.567164 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x5jxl" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="registry-server" containerID="cri-o://20de50a66059022f101ea1ee2503910dd26baab8dab8ae56d19038ea231c69ac" gracePeriod=30 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.578175 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbfwg"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.578466 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kbfwg" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="registry-server" containerID="cri-o://3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0" gracePeriod=30 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.587920 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-blt9s"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.588690 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.595793 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-blt9s"] Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.707921 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.708009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7847t\" (UniqueName: \"kubernetes.io/projected/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-kube-api-access-7847t\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.708338 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.809915 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7847t\" (UniqueName: \"kubernetes.io/projected/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-kube-api-access-7847t\") pod \"marketplace-operator-79b997595-blt9s\" (UID: 
\"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.810013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.810053 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.816324 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.817874 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.832670 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7847t\" (UniqueName: \"kubernetes.io/projected/3bc96cb9-c467-4da1-8aef-8ca0ef0889a4-kube-api-access-7847t\") pod \"marketplace-operator-79b997595-blt9s\" (UID: \"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.840719 4780 generic.go:334] "Generic (PLEG): container finished" podID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerID="fe0a0bc4079db73285cf38ed5a12da0079d29764eeff963f6a75630bcf52bd8e" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.840896 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pknb2" event={"ID":"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95","Type":"ContainerDied","Data":"fe0a0bc4079db73285cf38ed5a12da0079d29764eeff963f6a75630bcf52bd8e"} Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.843068 4780 generic.go:334] "Generic (PLEG): container finished" podID="f841ab7c-b591-480d-8c4a-70003c08e679" containerID="bbb24494e1d07bba67f3ee8c6f2d52ef2e1c38d80a4b734c5d555611d8394260" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.843224 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" event={"ID":"f841ab7c-b591-480d-8c4a-70003c08e679","Type":"ContainerDied","Data":"bbb24494e1d07bba67f3ee8c6f2d52ef2e1c38d80a4b734c5d555611d8394260"} Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.843342 4780 scope.go:117] "RemoveContainer" containerID="2d4a12b15001128752f6ecdaf4ade6b48319237518cb4ed1e2ff07d817fcec2d" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.845543 4780 generic.go:334] "Generic (PLEG): container finished" podID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerID="cdec0230b52d175e62a13b0106364993d22c29422278c0f2b7d89eccd63dbf55" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.845694 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerDied","Data":"cdec0230b52d175e62a13b0106364993d22c29422278c0f2b7d89eccd63dbf55"} Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.847354 4780 generic.go:334] "Generic (PLEG): container finished" podID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerID="3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.847498 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbfwg" event={"ID":"f67dbc25-bb01-4883-b25e-c34d66a3b4fe","Type":"ContainerDied","Data":"3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0"} Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.849975 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerID="20de50a66059022f101ea1ee2503910dd26baab8dab8ae56d19038ea231c69ac" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.850109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5jxl" event={"ID":"2e76db53-981b-4921-bfc0-8bb607700a4c","Type":"ContainerDied","Data":"20de50a66059022f101ea1ee2503910dd26baab8dab8ae56d19038ea231c69ac"} Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.851773 4780 generic.go:334] "Generic (PLEG): container finished" podID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerID="62a918f3b7e21f75f29902dfa8034e8fcef812ef115a999062ce1c0ece235506" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.852563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rw27" event={"ID":"ff2c7521-96f7-4727-b14b-537d7b9ead0d","Type":"ContainerDied","Data":"62a918f3b7e21f75f29902dfa8034e8fcef812ef115a999062ce1c0ece235506"} Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.853560 
4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.874573 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" podStartSLOduration=4.874555521 podStartE2EDuration="4.874555521s" podCreationTimestamp="2026-02-19 08:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:26:19.870288274 +0000 UTC m=+322.613945733" watchObservedRunningTime="2026-02-19 08:26:19.874555521 +0000 UTC m=+322.618212970" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.887118 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:26:19 crc kubenswrapper[4780]: I0219 08:26:19.904833 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s"
Feb 19 08:26:20 crc kubenswrapper[4780]: E0219 08:26:20.134208 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0 is running failed: container process not found" containerID="3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 08:26:20 crc kubenswrapper[4780]: E0219 08:26:20.135821 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0 is running failed: container process not found" containerID="3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 08:26:20 crc kubenswrapper[4780]: E0219 08:26:20.136154 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0 is running failed: container process not found" containerID="3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 08:26:20 crc kubenswrapper[4780]: E0219 08:26:20.136192 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-kbfwg" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="registry-server"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.341950 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rw27"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.365543 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pknb2"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.366407 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhjsh"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.380321 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5jxl"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.419708 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-catalog-content\") pod \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.419788 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-utilities\") pod \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.419918 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrpk\" (UniqueName: \"kubernetes.io/projected/ff2c7521-96f7-4727-b14b-537d7b9ead0d-kube-api-access-thrpk\") pod \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\" (UID: \"ff2c7521-96f7-4727-b14b-537d7b9ead0d\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.420737 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbfwg"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.421035 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-utilities" (OuterVolumeSpecName: "utilities") pod "ff2c7521-96f7-4727-b14b-537d7b9ead0d" (UID: "ff2c7521-96f7-4727-b14b-537d7b9ead0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.428568 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2c7521-96f7-4727-b14b-537d7b9ead0d-kube-api-access-thrpk" (OuterVolumeSpecName: "kube-api-access-thrpk") pod "ff2c7521-96f7-4727-b14b-537d7b9ead0d" (UID: "ff2c7521-96f7-4727-b14b-537d7b9ead0d"). InnerVolumeSpecName "kube-api-access-thrpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.443500 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.478192 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff2c7521-96f7-4727-b14b-537d7b9ead0d" (UID: "ff2c7521-96f7-4727-b14b-537d7b9ead0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521393 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-utilities\") pod \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521462 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn46g\" (UniqueName: \"kubernetes.io/projected/2e76db53-981b-4921-bfc0-8bb607700a4c-kube-api-access-rn46g\") pod \"2e76db53-981b-4921-bfc0-8bb607700a4c\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521491 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg5mr\" (UniqueName: \"kubernetes.io/projected/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-kube-api-access-dg5mr\") pod \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521535 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-utilities\") pod \"2e76db53-981b-4921-bfc0-8bb607700a4c\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521571 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-catalog-content\") pod \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\" (UID: \"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521594 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-trusted-ca\") pod \"f841ab7c-b591-480d-8c4a-70003c08e679\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521623 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcwn4\" (UniqueName: \"kubernetes.io/projected/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-kube-api-access-pcwn4\") pod \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521671 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-catalog-content\") pod \"2e76db53-981b-4921-bfc0-8bb607700a4c\" (UID: \"2e76db53-981b-4921-bfc0-8bb607700a4c\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521698 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-operator-metrics\") pod \"f841ab7c-b591-480d-8c4a-70003c08e679\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521721 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-catalog-content\") pod \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521759 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-utilities\") pod \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521794 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-catalog-content\") pod \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\" (UID: \"f67dbc25-bb01-4883-b25e-c34d66a3b4fe\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfqp2\" (UniqueName: \"kubernetes.io/projected/f841ab7c-b591-480d-8c4a-70003c08e679-kube-api-access-wfqp2\") pod \"f841ab7c-b591-480d-8c4a-70003c08e679\" (UID: \"f841ab7c-b591-480d-8c4a-70003c08e679\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-utilities\") pod \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.521870 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gq4w\" (UniqueName: \"kubernetes.io/projected/4da72063-3969-4d56-b11e-ab1fbbef5b3b-kube-api-access-7gq4w\") pod \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\" (UID: \"4da72063-3969-4d56-b11e-ab1fbbef5b3b\") "
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.522089 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.522382 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2c7521-96f7-4727-b14b-537d7b9ead0d-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.522393 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thrpk\" (UniqueName: \"kubernetes.io/projected/ff2c7521-96f7-4727-b14b-537d7b9ead0d-kube-api-access-thrpk\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.522139 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-utilities" (OuterVolumeSpecName: "utilities") pod "4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" (UID: "4cf0cb94-4d97-4c5c-bfbc-272acbda2b95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.522252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-utilities" (OuterVolumeSpecName: "utilities") pod "2e76db53-981b-4921-bfc0-8bb607700a4c" (UID: "2e76db53-981b-4921-bfc0-8bb607700a4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.522782 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-utilities" (OuterVolumeSpecName: "utilities") pod "f67dbc25-bb01-4883-b25e-c34d66a3b4fe" (UID: "f67dbc25-bb01-4883-b25e-c34d66a3b4fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.523002 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-utilities" (OuterVolumeSpecName: "utilities") pod "4da72063-3969-4d56-b11e-ab1fbbef5b3b" (UID: "4da72063-3969-4d56-b11e-ab1fbbef5b3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.525593 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da72063-3969-4d56-b11e-ab1fbbef5b3b-kube-api-access-7gq4w" (OuterVolumeSpecName: "kube-api-access-7gq4w") pod "4da72063-3969-4d56-b11e-ab1fbbef5b3b" (UID: "4da72063-3969-4d56-b11e-ab1fbbef5b3b"). InnerVolumeSpecName "kube-api-access-7gq4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.526050 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f841ab7c-b591-480d-8c4a-70003c08e679" (UID: "f841ab7c-b591-480d-8c4a-70003c08e679"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.526265 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f841ab7c-b591-480d-8c4a-70003c08e679-kube-api-access-wfqp2" (OuterVolumeSpecName: "kube-api-access-wfqp2") pod "f841ab7c-b591-480d-8c4a-70003c08e679" (UID: "f841ab7c-b591-480d-8c4a-70003c08e679"). InnerVolumeSpecName "kube-api-access-wfqp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.526726 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f841ab7c-b591-480d-8c4a-70003c08e679" (UID: "f841ab7c-b591-480d-8c4a-70003c08e679"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.533096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-kube-api-access-pcwn4" (OuterVolumeSpecName: "kube-api-access-pcwn4") pod "f67dbc25-bb01-4883-b25e-c34d66a3b4fe" (UID: "f67dbc25-bb01-4883-b25e-c34d66a3b4fe"). InnerVolumeSpecName "kube-api-access-pcwn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.533329 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e76db53-981b-4921-bfc0-8bb607700a4c-kube-api-access-rn46g" (OuterVolumeSpecName: "kube-api-access-rn46g") pod "2e76db53-981b-4921-bfc0-8bb607700a4c" (UID: "2e76db53-981b-4921-bfc0-8bb607700a4c"). InnerVolumeSpecName "kube-api-access-rn46g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.535207 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-kube-api-access-dg5mr" (OuterVolumeSpecName: "kube-api-access-dg5mr") pod "4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" (UID: "4cf0cb94-4d97-4c5c-bfbc-272acbda2b95"). InnerVolumeSpecName "kube-api-access-dg5mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.554372 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-blt9s"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.566691 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e76db53-981b-4921-bfc0-8bb607700a4c" (UID: "2e76db53-981b-4921-bfc0-8bb607700a4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.593729 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" (UID: "4cf0cb94-4d97-4c5c-bfbc-272acbda2b95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.604944 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4da72063-3969-4d56-b11e-ab1fbbef5b3b" (UID: "4da72063-3969-4d56-b11e-ab1fbbef5b3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623328 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn46g\" (UniqueName: \"kubernetes.io/projected/2e76db53-981b-4921-bfc0-8bb607700a4c-kube-api-access-rn46g\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623357 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg5mr\" (UniqueName: \"kubernetes.io/projected/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-kube-api-access-dg5mr\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623368 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623379 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623394 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623434 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcwn4\" (UniqueName: \"kubernetes.io/projected/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-kube-api-access-pcwn4\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623445 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e76db53-981b-4921-bfc0-8bb607700a4c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623457 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f841ab7c-b591-480d-8c4a-70003c08e679-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623468 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623477 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623486 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfqp2\" (UniqueName: \"kubernetes.io/projected/f841ab7c-b591-480d-8c4a-70003c08e679-kube-api-access-wfqp2\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623496 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da72063-3969-4d56-b11e-ab1fbbef5b3b-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623507 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gq4w\" (UniqueName: \"kubernetes.io/projected/4da72063-3969-4d56-b11e-ab1fbbef5b3b-kube-api-access-7gq4w\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.623515 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.678550 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f67dbc25-bb01-4883-b25e-c34d66a3b4fe" (UID: "f67dbc25-bb01-4883-b25e-c34d66a3b4fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.724566 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67dbc25-bb01-4883-b25e-c34d66a3b4fe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.860006 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rw27" event={"ID":"ff2c7521-96f7-4727-b14b-537d7b9ead0d","Type":"ContainerDied","Data":"2c186351b70c31ab23c389c109a9b0e8c66b73635182e05035919d3f2fdf424f"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.860022 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rw27"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.860069 4780 scope.go:117] "RemoveContainer" containerID="62a918f3b7e21f75f29902dfa8034e8fcef812ef115a999062ce1c0ece235506"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.863115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pknb2" event={"ID":"4cf0cb94-4d97-4c5c-bfbc-272acbda2b95","Type":"ContainerDied","Data":"3e7413e4585cd7f87788234b3a4f9d9cb083d1be23c99f16cffc6a56bbfc3a42"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.863310 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pknb2"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.869054 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.869063 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwdw8" event={"ID":"f841ab7c-b591-480d-8c4a-70003c08e679","Type":"ContainerDied","Data":"b8a0def4d9f2f147e0cbf871582df300d2785329b92e8d487c56e0a27470a072"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.873481 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhjsh"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.876206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhjsh" event={"ID":"4da72063-3969-4d56-b11e-ab1fbbef5b3b","Type":"ContainerDied","Data":"eb4b69e77d43f22092bdbc8a08288df110d0a0fc1fb3ef23eaeaefd557b0c4c3"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.878686 4780 scope.go:117] "RemoveContainer" containerID="b19ee7e34d3a8cbef9bc8e00b718b33f7b5b6de36442de60f57f2266ef324ce8"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.879797 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbfwg"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.879896 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbfwg" event={"ID":"f67dbc25-bb01-4883-b25e-c34d66a3b4fe","Type":"ContainerDied","Data":"42d9bfbfc42b9e97673368c0c390f2fb8fb76edd64bc30fbe1cedf34a89daae8"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.884288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5jxl" event={"ID":"2e76db53-981b-4921-bfc0-8bb607700a4c","Type":"ContainerDied","Data":"eb2b335c76e1ae4c58035e4a0aedc95d4a8645c61eec7db5ef46174d3cd91e13"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.884672 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5jxl"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.892010 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" event={"ID":"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4","Type":"ContainerStarted","Data":"f93d5d8a51de5d75e0eb9537439a8c4df7a70c31fe964c5472665a4bf5ea5e13"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.892049 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" event={"ID":"3bc96cb9-c467-4da1-8aef-8ca0ef0889a4","Type":"ContainerStarted","Data":"775ac9e2527c9570238f28051df267564f56f7f46d00761f31a424fb054f94c7"}
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.892551 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.893611 4780 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-blt9s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body=
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.893701 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" podUID="3bc96cb9-c467-4da1-8aef-8ca0ef0889a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.910301 4780 scope.go:117] "RemoveContainer" containerID="f3af6dcd3d36693463e9ba769f710adfe5d9fbbe27cd0332cb1ed4393498540a"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.915790 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pknb2"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.930345 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pknb2"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.938259 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rw27"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.940155 4780 scope.go:117] "RemoveContainer" containerID="fe0a0bc4079db73285cf38ed5a12da0079d29764eeff963f6a75630bcf52bd8e"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.947204 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rw27"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.952156 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwdw8"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.960324 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwdw8"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.962175 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s" podStartSLOduration=1.962153603 podStartE2EDuration="1.962153603s" podCreationTimestamp="2026-02-19 08:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:26:20.94202439 +0000 UTC m=+323.685681839" watchObservedRunningTime="2026-02-19 08:26:20.962153603 +0000 UTC m=+323.705811052"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.966715 4780 scope.go:117] "RemoveContainer" containerID="d5d5701f8ce3fb753fa73c4f31ce3fa4b3df2ead4822d0ad574cc081bde55356"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.974878 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbfwg"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.981076 4780 scope.go:117] "RemoveContainer" containerID="e199bcb73ef6d8527dabba3f3a8c48301a238469824cda46c29d749b722e42b7"
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.981325 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kbfwg"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.988891 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5jxl"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.993333 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5jxl"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.995847 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhjsh"]
Feb 19 08:26:20 crc kubenswrapper[4780]: I0219 08:26:20.999086 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qhjsh"]
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.001745 4780 scope.go:117] "RemoveContainer" containerID="bbb24494e1d07bba67f3ee8c6f2d52ef2e1c38d80a4b734c5d555611d8394260"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.038323 4780 scope.go:117] "RemoveContainer" containerID="cdec0230b52d175e62a13b0106364993d22c29422278c0f2b7d89eccd63dbf55"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.076948 4780 scope.go:117] "RemoveContainer" containerID="19ca5fa236b0864cf4caad14ef6802f775fea98b21a0d116bbfc1fe0eb876a81"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.094436 4780 scope.go:117] "RemoveContainer" containerID="c6536cbdddfeb3b684afe186c5c36146db1f993993a4cfeddf9bb68c3b23b48b"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.116507 4780 scope.go:117] "RemoveContainer" containerID="3969bedee92fc73f005d38d0e5179bac50efdc991b4a8c883b62222b9ff1b8f0"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.136835 4780 scope.go:117] "RemoveContainer" containerID="939603631d36564e1f3fd13d81896c7c0a336e7348d3bdb631bb6f427b5a3663"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.160730 4780 scope.go:117] "RemoveContainer" containerID="570c85979494d11e33cc2a8fb4961012f3eadef6a2559282f24fb7dc4157c0f2"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.178171 4780 scope.go:117] "RemoveContainer" containerID="20de50a66059022f101ea1ee2503910dd26baab8dab8ae56d19038ea231c69ac"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.202249 4780 scope.go:117] "RemoveContainer" containerID="07ab2454109977801b84f46c1ce805f08ec5a56154f59ae4156f0b697fc19364"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.216556 4780 scope.go:117] "RemoveContainer" containerID="330dfbb0264691393baca75d382991a28e298ac310ea079973f51697bcda76f5"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.908948 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-blt9s"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.944875 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" path="/var/lib/kubelet/pods/2e76db53-981b-4921-bfc0-8bb607700a4c/volumes"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.945590 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" path="/var/lib/kubelet/pods/4cf0cb94-4d97-4c5c-bfbc-272acbda2b95/volumes"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.946214 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" path="/var/lib/kubelet/pods/4da72063-3969-4d56-b11e-ab1fbbef5b3b/volumes"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.947571 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" path="/var/lib/kubelet/pods/f67dbc25-bb01-4883-b25e-c34d66a3b4fe/volumes"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.949227 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" path="/var/lib/kubelet/pods/f841ab7c-b591-480d-8c4a-70003c08e679/volumes"
Feb 19 08:26:21 crc kubenswrapper[4780]: I0219 08:26:21.950738 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" path="/var/lib/kubelet/pods/ff2c7521-96f7-4727-b14b-537d7b9ead0d/volumes"
Feb 19 08:26:36 crc kubenswrapper[4780]: I0219 08:26:36.336024 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 08:26:36 crc kubenswrapper[4780]: I0219 08:26:36.336575 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 08:27:06 crc kubenswrapper[4780]: I0219 08:27:06.336888 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 08:27:06 crc kubenswrapper[4780]: I0219 08:27:06.338110 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.039841 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4pmx"]
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040239 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="registry-server"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040262 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="registry-server"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040448 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="extract-utilities"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040473 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="extract-utilities"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040504 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040526 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040548 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="extract-utilities"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040566 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="extract-utilities"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040588 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="registry-server"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040605 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="registry-server"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040633 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="extract-content"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040650 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="extract-content"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040676 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="extract-utilities"
Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040693 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="extract-utilities"
Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040710
4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="extract-utilities" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040728 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="extract-utilities" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040750 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040767 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040786 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040802 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040830 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040847 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040871 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040887 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040908 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040925 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="extract-content" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040947 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.040964 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.040984 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041002 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.041024 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="extract-utilities" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041041 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="extract-utilities" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041288 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041313 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 
08:27:07.041333 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67dbc25-bb01-4883-b25e-c34d66a3b4fe" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041354 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf0cb94-4d97-4c5c-bfbc-272acbda2b95" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041374 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e76db53-981b-4921-bfc0-8bb607700a4c" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041387 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da72063-3969-4d56-b11e-ab1fbbef5b3b" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041404 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2c7521-96f7-4727-b14b-537d7b9ead0d" containerName="registry-server" Feb 19 08:27:07 crc kubenswrapper[4780]: E0219 08:27:07.041617 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.041641 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f841ab7c-b591-480d-8c4a-70003c08e679" containerName="marketplace-operator" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.042635 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.045997 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.050805 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4pmx"] Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.180248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-catalog-content\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.180313 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-utilities\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.180362 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65pj\" (UniqueName: \"kubernetes.io/projected/ad7720a3-e835-4a18-adb2-4591c2db322b-kube-api-access-k65pj\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.224185 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9h8d"] Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.225402 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.227868 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.247078 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9h8d"] Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.282550 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-catalog-content\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.283005 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-utilities\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.283244 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-catalog-content\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.283491 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k65pj\" (UniqueName: \"kubernetes.io/projected/ad7720a3-e835-4a18-adb2-4591c2db322b-kube-api-access-k65pj\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " 
pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.283576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-utilities\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.308515 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k65pj\" (UniqueName: \"kubernetes.io/projected/ad7720a3-e835-4a18-adb2-4591c2db322b-kube-api-access-k65pj\") pod \"certified-operators-h4pmx\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.375871 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.385071 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s525\" (UniqueName: \"kubernetes.io/projected/a253deca-98c7-4fa6-a416-c2f951e824f0-kube-api-access-7s525\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.385300 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a253deca-98c7-4fa6-a416-c2f951e824f0-catalog-content\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.385983 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a253deca-98c7-4fa6-a416-c2f951e824f0-utilities\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.487832 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a253deca-98c7-4fa6-a416-c2f951e824f0-utilities\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.487910 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s525\" (UniqueName: \"kubernetes.io/projected/a253deca-98c7-4fa6-a416-c2f951e824f0-kube-api-access-7s525\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.487948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a253deca-98c7-4fa6-a416-c2f951e824f0-catalog-content\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.488587 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a253deca-98c7-4fa6-a416-c2f951e824f0-catalog-content\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.488942 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a253deca-98c7-4fa6-a416-c2f951e824f0-utilities\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.517048 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s525\" (UniqueName: \"kubernetes.io/projected/a253deca-98c7-4fa6-a416-c2f951e824f0-kube-api-access-7s525\") pod \"community-operators-f9h8d\" (UID: \"a253deca-98c7-4fa6-a416-c2f951e824f0\") " pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.554735 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.849967 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4pmx"] Feb 19 08:27:07 crc kubenswrapper[4780]: I0219 08:27:07.967889 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9h8d"] Feb 19 08:27:07 crc kubenswrapper[4780]: W0219 08:27:07.979968 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda253deca_98c7_4fa6_a416_c2f951e824f0.slice/crio-ee511f1be6870d392f63202c49000f4c750a2e98b2729b08c5afe6fa49af9995 WatchSource:0}: Error finding container ee511f1be6870d392f63202c49000f4c750a2e98b2729b08c5afe6fa49af9995: Status 404 returned error can't find the container with id ee511f1be6870d392f63202c49000f4c750a2e98b2729b08c5afe6fa49af9995 Feb 19 08:27:08 crc kubenswrapper[4780]: I0219 08:27:08.184543 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9h8d" 
event={"ID":"a253deca-98c7-4fa6-a416-c2f951e824f0","Type":"ContainerStarted","Data":"ee511f1be6870d392f63202c49000f4c750a2e98b2729b08c5afe6fa49af9995"} Feb 19 08:27:08 crc kubenswrapper[4780]: I0219 08:27:08.186755 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerID="a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1" exitCode=0 Feb 19 08:27:08 crc kubenswrapper[4780]: I0219 08:27:08.186816 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4pmx" event={"ID":"ad7720a3-e835-4a18-adb2-4591c2db322b","Type":"ContainerDied","Data":"a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1"} Feb 19 08:27:08 crc kubenswrapper[4780]: I0219 08:27:08.186877 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4pmx" event={"ID":"ad7720a3-e835-4a18-adb2-4591c2db322b","Type":"ContainerStarted","Data":"88918b73b72f067780d32774b7229928fe09241e236fce13cb9e625344af7e5f"} Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.198538 4780 generic.go:334] "Generic (PLEG): container finished" podID="a253deca-98c7-4fa6-a416-c2f951e824f0" containerID="47a0e260eb3555328e2b0d56d0357438ddfaf4d88b9a7631c9e8946a79092f04" exitCode=0 Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.198627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9h8d" event={"ID":"a253deca-98c7-4fa6-a416-c2f951e824f0","Type":"ContainerDied","Data":"47a0e260eb3555328e2b0d56d0357438ddfaf4d88b9a7631c9e8946a79092f04"} Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.428223 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmbhx"] Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.429233 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.431728 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.448053 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmbhx"] Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.618913 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c7svd"] Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.619857 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.620515 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sftl\" (UniqueName: \"kubernetes.io/projected/c95adfb8-f3a6-4b61-a38d-258dc4528d47-kube-api-access-9sftl\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.620865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95adfb8-f3a6-4b61-a38d-258dc4528d47-utilities\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.620944 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95adfb8-f3a6-4b61-a38d-258dc4528d47-catalog-content\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " 
pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.621837 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.631544 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7svd"] Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.722910 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b287375-fa66-41f5-a4a6-d5b540e56b4b-utilities\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723011 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95adfb8-f3a6-4b61-a38d-258dc4528d47-catalog-content\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sftl\" (UniqueName: \"kubernetes.io/projected/c95adfb8-f3a6-4b61-a38d-258dc4528d47-kube-api-access-9sftl\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723100 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95adfb8-f3a6-4b61-a38d-258dc4528d47-utilities\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 
19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b287375-fa66-41f5-a4a6-d5b540e56b4b-catalog-content\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723180 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqnr\" (UniqueName: \"kubernetes.io/projected/5b287375-fa66-41f5-a4a6-d5b540e56b4b-kube-api-access-knqnr\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723637 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95adfb8-f3a6-4b61-a38d-258dc4528d47-catalog-content\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.723725 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95adfb8-f3a6-4b61-a38d-258dc4528d47-utilities\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.743363 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sftl\" (UniqueName: \"kubernetes.io/projected/c95adfb8-f3a6-4b61-a38d-258dc4528d47-kube-api-access-9sftl\") pod \"redhat-marketplace-qmbhx\" (UID: \"c95adfb8-f3a6-4b61-a38d-258dc4528d47\") " pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 
08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.743727 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.824166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b287375-fa66-41f5-a4a6-d5b540e56b4b-catalog-content\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.824391 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqnr\" (UniqueName: \"kubernetes.io/projected/5b287375-fa66-41f5-a4a6-d5b540e56b4b-kube-api-access-knqnr\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.824423 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b287375-fa66-41f5-a4a6-d5b540e56b4b-utilities\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.824739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b287375-fa66-41f5-a4a6-d5b540e56b4b-catalog-content\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.824774 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b287375-fa66-41f5-a4a6-d5b540e56b4b-utilities\") pod 
\"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.846501 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqnr\" (UniqueName: \"kubernetes.io/projected/5b287375-fa66-41f5-a4a6-d5b540e56b4b-kube-api-access-knqnr\") pod \"redhat-operators-c7svd\" (UID: \"5b287375-fa66-41f5-a4a6-d5b540e56b4b\") " pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:09 crc kubenswrapper[4780]: I0219 08:27:09.933955 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:10 crc kubenswrapper[4780]: I0219 08:27:10.128930 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmbhx"] Feb 19 08:27:10 crc kubenswrapper[4780]: W0219 08:27:10.136930 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95adfb8_f3a6_4b61_a38d_258dc4528d47.slice/crio-9408986cc0d7fd42c351fd3871c6ae639515b31bab8b01c507979ca69b14a3e9 WatchSource:0}: Error finding container 9408986cc0d7fd42c351fd3871c6ae639515b31bab8b01c507979ca69b14a3e9: Status 404 returned error can't find the container with id 9408986cc0d7fd42c351fd3871c6ae639515b31bab8b01c507979ca69b14a3e9 Feb 19 08:27:10 crc kubenswrapper[4780]: I0219 08:27:10.216947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9h8d" event={"ID":"a253deca-98c7-4fa6-a416-c2f951e824f0","Type":"ContainerStarted","Data":"7c4854cbfe9d28747c25d610d647a1446c2924f30bb99e17b1d3e29217eb4bba"} Feb 19 08:27:10 crc kubenswrapper[4780]: I0219 08:27:10.220719 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmbhx" 
event={"ID":"c95adfb8-f3a6-4b61-a38d-258dc4528d47","Type":"ContainerStarted","Data":"9408986cc0d7fd42c351fd3871c6ae639515b31bab8b01c507979ca69b14a3e9"} Feb 19 08:27:10 crc kubenswrapper[4780]: I0219 08:27:10.224472 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerID="df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85" exitCode=0 Feb 19 08:27:10 crc kubenswrapper[4780]: I0219 08:27:10.224526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4pmx" event={"ID":"ad7720a3-e835-4a18-adb2-4591c2db322b","Type":"ContainerDied","Data":"df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85"} Feb 19 08:27:10 crc kubenswrapper[4780]: I0219 08:27:10.320779 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7svd"] Feb 19 08:27:10 crc kubenswrapper[4780]: W0219 08:27:10.389663 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b287375_fa66_41f5_a4a6_d5b540e56b4b.slice/crio-ee2b10fe58e34d6577b1f35601b6042f76a159aaa35471f4d8c7ce2adf9dee0c WatchSource:0}: Error finding container ee2b10fe58e34d6577b1f35601b6042f76a159aaa35471f4d8c7ce2adf9dee0c: Status 404 returned error can't find the container with id ee2b10fe58e34d6577b1f35601b6042f76a159aaa35471f4d8c7ce2adf9dee0c Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.231417 4780 generic.go:334] "Generic (PLEG): container finished" podID="c95adfb8-f3a6-4b61-a38d-258dc4528d47" containerID="a0c3532a69e80efa2028f644c7896afd0bda61a80168088178374e1f5ebfe7f0" exitCode=0 Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.231745 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmbhx" 
event={"ID":"c95adfb8-f3a6-4b61-a38d-258dc4528d47","Type":"ContainerDied","Data":"a0c3532a69e80efa2028f644c7896afd0bda61a80168088178374e1f5ebfe7f0"} Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.233556 4780 generic.go:334] "Generic (PLEG): container finished" podID="5b287375-fa66-41f5-a4a6-d5b540e56b4b" containerID="d161a1968ad180c92b9fa719e721981504b194e4d5a6ce02d1d7521f89d4d97d" exitCode=0 Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.233728 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7svd" event={"ID":"5b287375-fa66-41f5-a4a6-d5b540e56b4b","Type":"ContainerDied","Data":"d161a1968ad180c92b9fa719e721981504b194e4d5a6ce02d1d7521f89d4d97d"} Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.233752 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7svd" event={"ID":"5b287375-fa66-41f5-a4a6-d5b540e56b4b","Type":"ContainerStarted","Data":"ee2b10fe58e34d6577b1f35601b6042f76a159aaa35471f4d8c7ce2adf9dee0c"} Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.236747 4780 generic.go:334] "Generic (PLEG): container finished" podID="a253deca-98c7-4fa6-a416-c2f951e824f0" containerID="7c4854cbfe9d28747c25d610d647a1446c2924f30bb99e17b1d3e29217eb4bba" exitCode=0 Feb 19 08:27:11 crc kubenswrapper[4780]: I0219 08:27:11.236816 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9h8d" event={"ID":"a253deca-98c7-4fa6-a416-c2f951e824f0","Type":"ContainerDied","Data":"7c4854cbfe9d28747c25d610d647a1446c2924f30bb99e17b1d3e29217eb4bba"} Feb 19 08:27:12 crc kubenswrapper[4780]: I0219 08:27:12.243847 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9h8d" event={"ID":"a253deca-98c7-4fa6-a416-c2f951e824f0","Type":"ContainerStarted","Data":"95d7ae199f185cdfd693bd0856247e6ed0a1ac424ce89e24f0733e230453c20e"} Feb 19 08:27:12 crc kubenswrapper[4780]: I0219 
08:27:12.250960 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4pmx" event={"ID":"ad7720a3-e835-4a18-adb2-4591c2db322b","Type":"ContainerStarted","Data":"19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0"} Feb 19 08:27:12 crc kubenswrapper[4780]: I0219 08:27:12.271071 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9h8d" podStartSLOduration=2.536643407 podStartE2EDuration="5.271051763s" podCreationTimestamp="2026-02-19 08:27:07 +0000 UTC" firstStartedPulling="2026-02-19 08:27:09.221025543 +0000 UTC m=+371.964683032" lastFinishedPulling="2026-02-19 08:27:11.955433939 +0000 UTC m=+374.699091388" observedRunningTime="2026-02-19 08:27:12.269712434 +0000 UTC m=+375.013369893" watchObservedRunningTime="2026-02-19 08:27:12.271051763 +0000 UTC m=+375.014709212" Feb 19 08:27:12 crc kubenswrapper[4780]: I0219 08:27:12.286359 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4pmx" podStartSLOduration=2.326476808 podStartE2EDuration="5.28633982s" podCreationTimestamp="2026-02-19 08:27:07 +0000 UTC" firstStartedPulling="2026-02-19 08:27:08.188705226 +0000 UTC m=+370.932362675" lastFinishedPulling="2026-02-19 08:27:11.148568218 +0000 UTC m=+373.892225687" observedRunningTime="2026-02-19 08:27:12.284479787 +0000 UTC m=+375.028137236" watchObservedRunningTime="2026-02-19 08:27:12.28633982 +0000 UTC m=+375.029997269" Feb 19 08:27:13 crc kubenswrapper[4780]: I0219 08:27:13.264700 4780 generic.go:334] "Generic (PLEG): container finished" podID="5b287375-fa66-41f5-a4a6-d5b540e56b4b" containerID="4bf7f48f8b3c8957fa62cc40c72c97f6c71b3e002c664d7a5aa1314d7d412cda" exitCode=0 Feb 19 08:27:13 crc kubenswrapper[4780]: I0219 08:27:13.264778 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7svd" 
event={"ID":"5b287375-fa66-41f5-a4a6-d5b540e56b4b","Type":"ContainerDied","Data":"4bf7f48f8b3c8957fa62cc40c72c97f6c71b3e002c664d7a5aa1314d7d412cda"} Feb 19 08:27:13 crc kubenswrapper[4780]: I0219 08:27:13.270505 4780 generic.go:334] "Generic (PLEG): container finished" podID="c95adfb8-f3a6-4b61-a38d-258dc4528d47" containerID="bff734c3f1c9012e885a6dbe004b22ad473512ad92e93e2aabba38f452593c95" exitCode=0 Feb 19 08:27:13 crc kubenswrapper[4780]: I0219 08:27:13.271833 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmbhx" event={"ID":"c95adfb8-f3a6-4b61-a38d-258dc4528d47","Type":"ContainerDied","Data":"bff734c3f1c9012e885a6dbe004b22ad473512ad92e93e2aabba38f452593c95"} Feb 19 08:27:14 crc kubenswrapper[4780]: I0219 08:27:14.277503 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7svd" event={"ID":"5b287375-fa66-41f5-a4a6-d5b540e56b4b","Type":"ContainerStarted","Data":"43d6bb1079536e1e38682950674022559cc69f6a9c2835a8788d798beba4c51e"} Feb 19 08:27:14 crc kubenswrapper[4780]: I0219 08:27:14.279306 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmbhx" event={"ID":"c95adfb8-f3a6-4b61-a38d-258dc4528d47","Type":"ContainerStarted","Data":"da2518c1e8a377384394ffb860220b365eec9c8b2fee49c28d2819d598473087"} Feb 19 08:27:14 crc kubenswrapper[4780]: I0219 08:27:14.304920 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c7svd" podStartSLOduration=2.886641553 podStartE2EDuration="5.304905788s" podCreationTimestamp="2026-02-19 08:27:09 +0000 UTC" firstStartedPulling="2026-02-19 08:27:11.235540215 +0000 UTC m=+373.979197664" lastFinishedPulling="2026-02-19 08:27:13.65380443 +0000 UTC m=+376.397461899" observedRunningTime="2026-02-19 08:27:14.302099767 +0000 UTC m=+377.045757216" watchObservedRunningTime="2026-02-19 08:27:14.304905788 +0000 UTC m=+377.048563237" Feb 
19 08:27:15 crc kubenswrapper[4780]: I0219 08:27:15.825025 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmbhx" podStartSLOduration=4.059837186 podStartE2EDuration="6.825006291s" podCreationTimestamp="2026-02-19 08:27:09 +0000 UTC" firstStartedPulling="2026-02-19 08:27:11.233513167 +0000 UTC m=+373.977170626" lastFinishedPulling="2026-02-19 08:27:13.998682242 +0000 UTC m=+376.742339731" observedRunningTime="2026-02-19 08:27:14.325152356 +0000 UTC m=+377.068809805" watchObservedRunningTime="2026-02-19 08:27:15.825006291 +0000 UTC m=+378.568663760" Feb 19 08:27:15 crc kubenswrapper[4780]: I0219 08:27:15.826589 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd"] Feb 19 08:27:15 crc kubenswrapper[4780]: I0219 08:27:15.826811 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" podUID="e9a0e890-a595-46f8-9822-80e3270b6f0f" containerName="controller-manager" containerID="cri-o://80e18c079c89d17d0299f40d641d7c94f997a810747f4a830bb0c5f5c06c32ca" gracePeriod=30 Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.290338 4780 generic.go:334] "Generic (PLEG): container finished" podID="e9a0e890-a595-46f8-9822-80e3270b6f0f" containerID="80e18c079c89d17d0299f40d641d7c94f997a810747f4a830bb0c5f5c06c32ca" exitCode=0 Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.290560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" event={"ID":"e9a0e890-a595-46f8-9822-80e3270b6f0f","Type":"ContainerDied","Data":"80e18c079c89d17d0299f40d641d7c94f997a810747f4a830bb0c5f5c06c32ca"} Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.745001 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.820476 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-proxy-ca-bundles\") pod \"e9a0e890-a595-46f8-9822-80e3270b6f0f\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.820886 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-client-ca\") pod \"e9a0e890-a595-46f8-9822-80e3270b6f0f\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.821063 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r92gj\" (UniqueName: \"kubernetes.io/projected/e9a0e890-a595-46f8-9822-80e3270b6f0f-kube-api-access-r92gj\") pod \"e9a0e890-a595-46f8-9822-80e3270b6f0f\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.821279 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-config\") pod \"e9a0e890-a595-46f8-9822-80e3270b6f0f\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.821381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e9a0e890-a595-46f8-9822-80e3270b6f0f" (UID: "e9a0e890-a595-46f8-9822-80e3270b6f0f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.821495 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "e9a0e890-a595-46f8-9822-80e3270b6f0f" (UID: "e9a0e890-a595-46f8-9822-80e3270b6f0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.821916 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-config" (OuterVolumeSpecName: "config") pod "e9a0e890-a595-46f8-9822-80e3270b6f0f" (UID: "e9a0e890-a595-46f8-9822-80e3270b6f0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.822381 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a0e890-a595-46f8-9822-80e3270b6f0f-serving-cert\") pod \"e9a0e890-a595-46f8-9822-80e3270b6f0f\" (UID: \"e9a0e890-a595-46f8-9822-80e3270b6f0f\") " Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.823184 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.823301 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.823386 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a0e890-a595-46f8-9822-80e3270b6f0f-config\") on 
node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.826152 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a0e890-a595-46f8-9822-80e3270b6f0f-kube-api-access-r92gj" (OuterVolumeSpecName: "kube-api-access-r92gj") pod "e9a0e890-a595-46f8-9822-80e3270b6f0f" (UID: "e9a0e890-a595-46f8-9822-80e3270b6f0f"). InnerVolumeSpecName "kube-api-access-r92gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.827198 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a0e890-a595-46f8-9822-80e3270b6f0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e9a0e890-a595-46f8-9822-80e3270b6f0f" (UID: "e9a0e890-a595-46f8-9822-80e3270b6f0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.924477 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a0e890-a595-46f8-9822-80e3270b6f0f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[4780]: I0219 08:27:16.924520 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r92gj\" (UniqueName: \"kubernetes.io/projected/e9a0e890-a595-46f8-9822-80e3270b6f0f-kube-api-access-r92gj\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.298483 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" event={"ID":"e9a0e890-a595-46f8-9822-80e3270b6f0f","Type":"ContainerDied","Data":"7db7115afae52780adc4444860d5a85058a90b55382a94f277e2bc9aa5fd8c44"} Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.298530 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.298539 4780 scope.go:117] "RemoveContainer" containerID="80e18c079c89d17d0299f40d641d7c94f997a810747f4a830bb0c5f5c06c32ca" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.325904 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd"] Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.328905 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-6f7kd"] Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.376740 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.377025 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.422835 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.554896 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.554951 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.590399 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.825291 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-jzzvn"] Feb 19 08:27:17 
crc kubenswrapper[4780]: E0219 08:27:17.825502 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a0e890-a595-46f8-9822-80e3270b6f0f" containerName="controller-manager" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.825516 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a0e890-a595-46f8-9822-80e3270b6f0f" containerName="controller-manager" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.825615 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a0e890-a595-46f8-9822-80e3270b6f0f" containerName="controller-manager" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.826039 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.828984 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.830254 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.830370 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.831164 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.831725 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.831958 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.839668 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91497950-ec6d-486d-803e-1cdbd8e746ea-serving-cert\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.839813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-proxy-ca-bundles\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.839949 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-config\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.840058 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-client-ca\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.840216 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7jd\" (UniqueName: \"kubernetes.io/projected/91497950-ec6d-486d-803e-1cdbd8e746ea-kube-api-access-bh7jd\") pod 
\"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.840997 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.844398 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-jzzvn"] Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.941389 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-proxy-ca-bundles\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.942593 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-config\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.942706 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-client-ca\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.942819 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-proxy-ca-bundles\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.942818 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7jd\" (UniqueName: \"kubernetes.io/projected/91497950-ec6d-486d-803e-1cdbd8e746ea-kube-api-access-bh7jd\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.942911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91497950-ec6d-486d-803e-1cdbd8e746ea-serving-cert\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.943935 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-client-ca\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.944865 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91497950-ec6d-486d-803e-1cdbd8e746ea-config\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.947375 4780 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e9a0e890-a595-46f8-9822-80e3270b6f0f" path="/var/lib/kubelet/pods/e9a0e890-a595-46f8-9822-80e3270b6f0f/volumes" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.950384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91497950-ec6d-486d-803e-1cdbd8e746ea-serving-cert\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:17 crc kubenswrapper[4780]: I0219 08:27:17.959568 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7jd\" (UniqueName: \"kubernetes.io/projected/91497950-ec6d-486d-803e-1cdbd8e746ea-kube-api-access-bh7jd\") pod \"controller-manager-5594b7978f-jzzvn\" (UID: \"91497950-ec6d-486d-803e-1cdbd8e746ea\") " pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:18 crc kubenswrapper[4780]: I0219 08:27:18.144693 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" Feb 19 08:27:18 crc kubenswrapper[4780]: I0219 08:27:18.353702 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9h8d" Feb 19 08:27:18 crc kubenswrapper[4780]: I0219 08:27:18.356611 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 08:27:18 crc kubenswrapper[4780]: I0219 08:27:18.561421 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5594b7978f-jzzvn"] Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.318185 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" event={"ID":"91497950-ec6d-486d-803e-1cdbd8e746ea","Type":"ContainerStarted","Data":"1d9b469d2729a765bac527356c4a487680ee79a984d63a7e2f5dd1ed85f41a35"} Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.318677 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" event={"ID":"91497950-ec6d-486d-803e-1cdbd8e746ea","Type":"ContainerStarted","Data":"79ef6c30be7b7ea80b1bbe61fb98f46f59978e3fd7d7117c166b6939c1dced76"} Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.744777 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.744828 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.787964 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.934289 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:19 crc kubenswrapper[4780]: I0219 08:27:19.934338 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:20 crc kubenswrapper[4780]: I0219 08:27:20.001302 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:20 crc kubenswrapper[4780]: I0219 08:27:20.347413 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn" podStartSLOduration=5.347392341 podStartE2EDuration="5.347392341s" podCreationTimestamp="2026-02-19 08:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:20.343402027 +0000 UTC m=+383.087059496" watchObservedRunningTime="2026-02-19 08:27:20.347392341 +0000 UTC m=+383.091049800" Feb 19 08:27:20 crc kubenswrapper[4780]: I0219 08:27:20.376184 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c7svd" Feb 19 08:27:20 crc kubenswrapper[4780]: I0219 08:27:20.384448 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmbhx" Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.893573 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gtzf6"] Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.894289 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.911204 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gtzf6"]
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934530 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4967ca5f-15a8-4a01-841a-c89bd800470f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934592 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4967ca5f-15a8-4a01-841a-c89bd800470f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934618 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-registry-tls\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934636 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqw8\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-kube-api-access-rnqw8\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934670 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4967ca5f-15a8-4a01-841a-c89bd800470f-trusted-ca\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934688 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-bound-sa-token\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934716 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.934752 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4967ca5f-15a8-4a01-841a-c89bd800470f-registry-certificates\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:22 crc kubenswrapper[4780]: I0219 08:27:22.960475 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.035778 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4967ca5f-15a8-4a01-841a-c89bd800470f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.035861 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-registry-tls\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.036284 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4967ca5f-15a8-4a01-841a-c89bd800470f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.036486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqw8\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-kube-api-access-rnqw8\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.036582 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4967ca5f-15a8-4a01-841a-c89bd800470f-trusted-ca\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.036607 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-bound-sa-token\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.036651 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4967ca5f-15a8-4a01-841a-c89bd800470f-registry-certificates\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.036734 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4967ca5f-15a8-4a01-841a-c89bd800470f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.038257 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4967ca5f-15a8-4a01-841a-c89bd800470f-registry-certificates\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.038464 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4967ca5f-15a8-4a01-841a-c89bd800470f-trusted-ca\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.041747 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4967ca5f-15a8-4a01-841a-c89bd800470f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.045034 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-registry-tls\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.054922 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-bound-sa-token\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.058991 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqw8\" (UniqueName: \"kubernetes.io/projected/4967ca5f-15a8-4a01-841a-c89bd800470f-kube-api-access-rnqw8\") pod \"image-registry-66df7c8f76-gtzf6\" (UID: \"4967ca5f-15a8-4a01-841a-c89bd800470f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.216478 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:23 crc kubenswrapper[4780]: I0219 08:27:23.631894 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gtzf6"]
Feb 19 08:27:23 crc kubenswrapper[4780]: W0219 08:27:23.642572 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4967ca5f_15a8_4a01_841a_c89bd800470f.slice/crio-d3c4ba54dc7f0ad95c84ceef62a0c301aec89a629ea9e32d49bc74902646baa3 WatchSource:0}: Error finding container d3c4ba54dc7f0ad95c84ceef62a0c301aec89a629ea9e32d49bc74902646baa3: Status 404 returned error can't find the container with id d3c4ba54dc7f0ad95c84ceef62a0c301aec89a629ea9e32d49bc74902646baa3
Feb 19 08:27:24 crc kubenswrapper[4780]: I0219 08:27:24.364409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6" event={"ID":"4967ca5f-15a8-4a01-841a-c89bd800470f","Type":"ContainerStarted","Data":"d3c4ba54dc7f0ad95c84ceef62a0c301aec89a629ea9e32d49bc74902646baa3"}
Feb 19 08:27:26 crc kubenswrapper[4780]: I0219 08:27:26.379096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6" event={"ID":"4967ca5f-15a8-4a01-841a-c89bd800470f","Type":"ContainerStarted","Data":"795c7189cf73ca378b33cf11b4277550ba0754aab2a58bcaea88397e10ca4331"}
Feb 19 08:27:27 crc kubenswrapper[4780]: I0219 08:27:27.385942 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:27 crc kubenswrapper[4780]: I0219 08:27:27.409408 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6" podStartSLOduration=5.409388893 podStartE2EDuration="5.409388893s" podCreationTimestamp="2026-02-19 08:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:27.408186759 +0000 UTC m=+390.151844238" watchObservedRunningTime="2026-02-19 08:27:27.409388893 +0000 UTC m=+390.153046352"
Feb 19 08:27:28 crc kubenswrapper[4780]: I0219 08:27:28.145471 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn"
Feb 19 08:27:28 crc kubenswrapper[4780]: I0219 08:27:28.156034 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5594b7978f-jzzvn"
Feb 19 08:27:35 crc kubenswrapper[4780]: I0219 08:27:35.842139 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"]
Feb 19 08:27:35 crc kubenswrapper[4780]: I0219 08:27:35.843003 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" podUID="a796501f-3442-4790-8740-6d6d68514d64" containerName="route-controller-manager" containerID="cri-o://13bac1de014514ffc237cfed0ee9c382766a356d86ae2a12fa89885d9fbfefdf" gracePeriod=30
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.336268 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.336359 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.336428 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts"
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.337217 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4f5b86a0f96c708c0707b9d0e1e3124f0246704294706485733335212a268e0"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.337316 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://d4f5b86a0f96c708c0707b9d0e1e3124f0246704294706485733335212a268e0" gracePeriod=600
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.442465 4780 generic.go:334] "Generic (PLEG): container finished" podID="a796501f-3442-4790-8740-6d6d68514d64" containerID="13bac1de014514ffc237cfed0ee9c382766a356d86ae2a12fa89885d9fbfefdf" exitCode=0
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.442540 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" event={"ID":"a796501f-3442-4790-8740-6d6d68514d64","Type":"ContainerDied","Data":"13bac1de014514ffc237cfed0ee9c382766a356d86ae2a12fa89885d9fbfefdf"}
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.838966 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.860704 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-client-ca\") pod \"a796501f-3442-4790-8740-6d6d68514d64\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") "
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.861608 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-config\") pod \"a796501f-3442-4790-8740-6d6d68514d64\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") "
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.861646 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8chrd\" (UniqueName: \"kubernetes.io/projected/a796501f-3442-4790-8740-6d6d68514d64-kube-api-access-8chrd\") pod \"a796501f-3442-4790-8740-6d6d68514d64\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") "
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.861688 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a796501f-3442-4790-8740-6d6d68514d64-serving-cert\") pod \"a796501f-3442-4790-8740-6d6d68514d64\" (UID: \"a796501f-3442-4790-8740-6d6d68514d64\") "
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.862031 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-client-ca" (OuterVolumeSpecName: "client-ca") pod "a796501f-3442-4790-8740-6d6d68514d64" (UID: "a796501f-3442-4790-8740-6d6d68514d64"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.862841 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-config" (OuterVolumeSpecName: "config") pod "a796501f-3442-4790-8740-6d6d68514d64" (UID: "a796501f-3442-4790-8740-6d6d68514d64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.864464 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.864492 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a796501f-3442-4790-8740-6d6d68514d64-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.866841 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a796501f-3442-4790-8740-6d6d68514d64-kube-api-access-8chrd" (OuterVolumeSpecName: "kube-api-access-8chrd") pod "a796501f-3442-4790-8740-6d6d68514d64" (UID: "a796501f-3442-4790-8740-6d6d68514d64"). InnerVolumeSpecName "kube-api-access-8chrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.866994 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a796501f-3442-4790-8740-6d6d68514d64-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a796501f-3442-4790-8740-6d6d68514d64" (UID: "a796501f-3442-4790-8740-6d6d68514d64"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.965727 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8chrd\" (UniqueName: \"kubernetes.io/projected/a796501f-3442-4790-8740-6d6d68514d64-kube-api-access-8chrd\") on node \"crc\" DevicePath \"\""
Feb 19 08:27:36 crc kubenswrapper[4780]: I0219 08:27:36.965764 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a796501f-3442-4790-8740-6d6d68514d64-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.450063 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr" event={"ID":"a796501f-3442-4790-8740-6d6d68514d64","Type":"ContainerDied","Data":"c3c2a57d55b0f52a07e08101f050242e9767eeffa9558711e7db59258b4d82aa"}
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.450492 4780 scope.go:117] "RemoveContainer" containerID="13bac1de014514ffc237cfed0ee9c382766a356d86ae2a12fa89885d9fbfefdf"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.450089 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.455745 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="d4f5b86a0f96c708c0707b9d0e1e3124f0246704294706485733335212a268e0" exitCode=0
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.455784 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"d4f5b86a0f96c708c0707b9d0e1e3124f0246704294706485733335212a268e0"}
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.455811 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"0141248a6e9e107b13bbd82e62fd102183654747a9d088dafde81ee055021a18"}
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.472619 4780 scope.go:117] "RemoveContainer" containerID="318fee8e29163d3525860c81a3d64b97aee402e2685c6110d123189e07ae5797"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.488657 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"]
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.492893 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b8b66648-mcvdr"]
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.845706 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"]
Feb 19 08:27:37 crc kubenswrapper[4780]: E0219 08:27:37.846057 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a796501f-3442-4790-8740-6d6d68514d64" containerName="route-controller-manager"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.846087 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a796501f-3442-4790-8740-6d6d68514d64" containerName="route-controller-manager"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.846328 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a796501f-3442-4790-8740-6d6d68514d64" containerName="route-controller-manager"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.846974 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.851797 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.851808 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.851801 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.853104 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.853745 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.854274 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.859924 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"]
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.882007 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-serving-cert\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.882534 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9n7\" (UniqueName: \"kubernetes.io/projected/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-kube-api-access-9c9n7\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.882794 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-client-ca\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.883075 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-config\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.947096 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a796501f-3442-4790-8740-6d6d68514d64" path="/var/lib/kubelet/pods/a796501f-3442-4790-8740-6d6d68514d64/volumes"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.984999 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-serving-cert\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.985087 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9n7\" (UniqueName: \"kubernetes.io/projected/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-kube-api-access-9c9n7\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.985159 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-client-ca\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.985297 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-config\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.987291 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-client-ca\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.987840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-config\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:37 crc kubenswrapper[4780]: I0219 08:27:37.995196 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-serving-cert\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:38 crc kubenswrapper[4780]: I0219 08:27:38.019791 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9n7\" (UniqueName: \"kubernetes.io/projected/bcf4bce3-2552-492e-a7c4-cdca9fdcf90b-kube-api-access-9c9n7\") pod \"route-controller-manager-7ccc5c98b4-89ln7\" (UID: \"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:38 crc kubenswrapper[4780]: I0219 08:27:38.182747 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:38 crc kubenswrapper[4780]: I0219 08:27:38.611312 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"]
Feb 19 08:27:38 crc kubenswrapper[4780]: W0219 08:27:38.626865 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf4bce3_2552_492e_a7c4_cdca9fdcf90b.slice/crio-0645e492e4444810ca1b3b1ef6d61b7f1dd025b3c7528d8b9cf2aa9f70a40ea2 WatchSource:0}: Error finding container 0645e492e4444810ca1b3b1ef6d61b7f1dd025b3c7528d8b9cf2aa9f70a40ea2: Status 404 returned error can't find the container with id 0645e492e4444810ca1b3b1ef6d61b7f1dd025b3c7528d8b9cf2aa9f70a40ea2
Feb 19 08:27:39 crc kubenswrapper[4780]: I0219 08:27:39.482223 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7" event={"ID":"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b","Type":"ContainerStarted","Data":"3e71b2c65aafd2f6a94d21e414f9800518348fa468aab39fcd6e75377c5f650a"}
Feb 19 08:27:39 crc kubenswrapper[4780]: I0219 08:27:39.482624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7" event={"ID":"bcf4bce3-2552-492e-a7c4-cdca9fdcf90b","Type":"ContainerStarted","Data":"0645e492e4444810ca1b3b1ef6d61b7f1dd025b3c7528d8b9cf2aa9f70a40ea2"}
Feb 19 08:27:39 crc kubenswrapper[4780]: I0219 08:27:39.483403 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:39 crc kubenswrapper[4780]: I0219 08:27:39.512988 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7" podStartSLOduration=4.512963358 podStartE2EDuration="4.512963358s" podCreationTimestamp="2026-02-19 08:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:39.510811116 +0000 UTC m=+402.254468595" watchObservedRunningTime="2026-02-19 08:27:39.512963358 +0000 UTC m=+402.256620837"
Feb 19 08:27:39 crc kubenswrapper[4780]: I0219 08:27:39.965807 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7ccc5c98b4-89ln7"
Feb 19 08:27:43 crc kubenswrapper[4780]: I0219 08:27:43.222854 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gtzf6"
Feb 19 08:27:43 crc kubenswrapper[4780]: I0219 08:27:43.280492 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cv2g8"]
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.323651 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" podUID="80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" containerName="registry" containerID="cri-o://6a39451dc49b8dc13e0111b61ba2bcc3004ef2bbada475ade3ed025fad125151" gracePeriod=30
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.685914 4780 generic.go:334] "Generic (PLEG): container finished" podID="80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" containerID="6a39451dc49b8dc13e0111b61ba2bcc3004ef2bbada475ade3ed025fad125151" exitCode=0
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.685961 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" event={"ID":"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f","Type":"ContainerDied","Data":"6a39451dc49b8dc13e0111b61ba2bcc3004ef2bbada475ade3ed025fad125151"}
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.685987 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" event={"ID":"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f","Type":"ContainerDied","Data":"491471857da159e8552b3b8a1a11503982d044b0467700833e95083674678c4d"}
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.685997 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="491471857da159e8552b3b8a1a11503982d044b0467700833e95083674678c4d"
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.707966 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8"
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874105 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-installation-pull-secrets\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874241 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-trusted-ca\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874285 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55knz\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-kube-api-access-55knz\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-tls\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874400 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-ca-trust-extracted\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874460 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-certificates\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874520 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-bound-sa-token\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.874850 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\" (UID: \"80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f\") "
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.875450 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.876219 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.881581 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.882672 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.886862 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-kube-api-access-55knz" (OuterVolumeSpecName: "kube-api-access-55knz") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "kube-api-access-55knz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.887829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.890753 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.892611 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" (UID: "80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977098 4780 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977188 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977218 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55knz\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-kube-api-access-55knz\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977236 4780 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977253 4780 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977270 4780 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:08 crc kubenswrapper[4780]: I0219 08:28:08.977287 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:09 crc 
kubenswrapper[4780]: I0219 08:28:09.691783 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cv2g8" Feb 19 08:28:09 crc kubenswrapper[4780]: I0219 08:28:09.725091 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cv2g8"] Feb 19 08:28:09 crc kubenswrapper[4780]: I0219 08:28:09.728208 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cv2g8"] Feb 19 08:28:09 crc kubenswrapper[4780]: I0219 08:28:09.948173 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" path="/var/lib/kubelet/pods/80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f/volumes" Feb 19 08:29:36 crc kubenswrapper[4780]: I0219 08:29:36.336823 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:29:36 crc kubenswrapper[4780]: I0219 08:29:36.337453 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:29:58 crc kubenswrapper[4780]: I0219 08:29:58.360407 4780 scope.go:117] "RemoveContainer" containerID="6a39451dc49b8dc13e0111b61ba2bcc3004ef2bbada475ade3ed025fad125151" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.182856 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc"] Feb 19 08:30:00 crc kubenswrapper[4780]: E0219 08:30:00.183144 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" containerName="registry" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.183163 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" containerName="registry" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.183281 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cbb07a-c89f-46cb-b9ef-1dfdc4dc167f" containerName="registry" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.183715 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.187468 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.187466 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.195746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pncdt\" (UniqueName: \"kubernetes.io/projected/7e11610c-3763-4c6b-954a-bce83e2aec5a-kube-api-access-pncdt\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.195800 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e11610c-3763-4c6b-954a-bce83e2aec5a-secret-volume\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.195837 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e11610c-3763-4c6b-954a-bce83e2aec5a-config-volume\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.202211 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc"] Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.296625 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pncdt\" (UniqueName: \"kubernetes.io/projected/7e11610c-3763-4c6b-954a-bce83e2aec5a-kube-api-access-pncdt\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.296685 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e11610c-3763-4c6b-954a-bce83e2aec5a-secret-volume\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.296732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e11610c-3763-4c6b-954a-bce83e2aec5a-config-volume\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 
crc kubenswrapper[4780]: I0219 08:30:00.297666 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e11610c-3763-4c6b-954a-bce83e2aec5a-config-volume\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.303673 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e11610c-3763-4c6b-954a-bce83e2aec5a-secret-volume\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.314052 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pncdt\" (UniqueName: \"kubernetes.io/projected/7e11610c-3763-4c6b-954a-bce83e2aec5a-kube-api-access-pncdt\") pod \"collect-profiles-29524830-wl9dc\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.500991 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:00 crc kubenswrapper[4780]: I0219 08:30:00.702672 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc"] Feb 19 08:30:01 crc kubenswrapper[4780]: I0219 08:30:01.469764 4780 generic.go:334] "Generic (PLEG): container finished" podID="7e11610c-3763-4c6b-954a-bce83e2aec5a" containerID="d6b56a3c06eb41659aab416a6300badd7083bc5f86557902bdff626df48241c4" exitCode=0 Feb 19 08:30:01 crc kubenswrapper[4780]: I0219 08:30:01.469864 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" event={"ID":"7e11610c-3763-4c6b-954a-bce83e2aec5a","Type":"ContainerDied","Data":"d6b56a3c06eb41659aab416a6300badd7083bc5f86557902bdff626df48241c4"} Feb 19 08:30:01 crc kubenswrapper[4780]: I0219 08:30:01.472157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" event={"ID":"7e11610c-3763-4c6b-954a-bce83e2aec5a","Type":"ContainerStarted","Data":"b9cc071474b8afcb11ee43fc5e9236d9c2f48b9804ab98a76a7d206038a04cbd"} Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.733071 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.828665 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e11610c-3763-4c6b-954a-bce83e2aec5a-config-volume\") pod \"7e11610c-3763-4c6b-954a-bce83e2aec5a\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.828746 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pncdt\" (UniqueName: \"kubernetes.io/projected/7e11610c-3763-4c6b-954a-bce83e2aec5a-kube-api-access-pncdt\") pod \"7e11610c-3763-4c6b-954a-bce83e2aec5a\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.828808 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e11610c-3763-4c6b-954a-bce83e2aec5a-secret-volume\") pod \"7e11610c-3763-4c6b-954a-bce83e2aec5a\" (UID: \"7e11610c-3763-4c6b-954a-bce83e2aec5a\") " Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.829462 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e11610c-3763-4c6b-954a-bce83e2aec5a-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e11610c-3763-4c6b-954a-bce83e2aec5a" (UID: "7e11610c-3763-4c6b-954a-bce83e2aec5a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.837963 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e11610c-3763-4c6b-954a-bce83e2aec5a-kube-api-access-pncdt" (OuterVolumeSpecName: "kube-api-access-pncdt") pod "7e11610c-3763-4c6b-954a-bce83e2aec5a" (UID: "7e11610c-3763-4c6b-954a-bce83e2aec5a"). 
InnerVolumeSpecName "kube-api-access-pncdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.840248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e11610c-3763-4c6b-954a-bce83e2aec5a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e11610c-3763-4c6b-954a-bce83e2aec5a" (UID: "7e11610c-3763-4c6b-954a-bce83e2aec5a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.930002 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e11610c-3763-4c6b-954a-bce83e2aec5a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.930048 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pncdt\" (UniqueName: \"kubernetes.io/projected/7e11610c-3763-4c6b-954a-bce83e2aec5a-kube-api-access-pncdt\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:02 crc kubenswrapper[4780]: I0219 08:30:02.930064 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e11610c-3763-4c6b-954a-bce83e2aec5a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:03 crc kubenswrapper[4780]: I0219 08:30:03.485881 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" event={"ID":"7e11610c-3763-4c6b-954a-bce83e2aec5a","Type":"ContainerDied","Data":"b9cc071474b8afcb11ee43fc5e9236d9c2f48b9804ab98a76a7d206038a04cbd"} Feb 19 08:30:03 crc kubenswrapper[4780]: I0219 08:30:03.485924 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9cc071474b8afcb11ee43fc5e9236d9c2f48b9804ab98a76a7d206038a04cbd" Feb 19 08:30:03 crc kubenswrapper[4780]: I0219 08:30:03.485947 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc" Feb 19 08:30:06 crc kubenswrapper[4780]: I0219 08:30:06.336546 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:30:06 crc kubenswrapper[4780]: I0219 08:30:06.337116 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.335751 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.336289 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.336346 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.337000 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0141248a6e9e107b13bbd82e62fd102183654747a9d088dafde81ee055021a18"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.337068 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://0141248a6e9e107b13bbd82e62fd102183654747a9d088dafde81ee055021a18" gracePeriod=600 Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.706892 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="0141248a6e9e107b13bbd82e62fd102183654747a9d088dafde81ee055021a18" exitCode=0 Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.706983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"0141248a6e9e107b13bbd82e62fd102183654747a9d088dafde81ee055021a18"} Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.707554 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"84ed25dcb2239f2ba5f5ca3fb35c26e2541ae4e71e20ce55bb0fede65e97942c"} Feb 19 08:30:36 crc kubenswrapper[4780]: I0219 08:30:36.707603 4780 scope.go:117] "RemoveContainer" containerID="d4f5b86a0f96c708c0707b9d0e1e3124f0246704294706485733335212a268e0" Feb 19 08:32:36 crc kubenswrapper[4780]: I0219 08:32:36.336759 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:32:36 crc kubenswrapper[4780]: I0219 08:32:36.337384 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.391797 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-skpt9"] Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395056 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-controller" containerID="cri-o://d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395278 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="nbdb" containerID="cri-o://0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395358 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-node" containerID="cri-o://5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395461 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" 
podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-acl-logging" containerID="cri-o://b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395485 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="northd" containerID="cri-o://71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395604 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="sbdb" containerID="cri-o://9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.395865 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.469215 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" containerID="cri-o://ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" gracePeriod=30 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.770600 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/3.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.775722 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovn-acl-logging/0.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.776508 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovn-controller/0.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.777279 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873595 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-openvswitch\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873650 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-etc-openvswitch\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873688 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-config\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-var-lib-openvswitch\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc 
kubenswrapper[4780]: I0219 08:32:58.873750 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-systemd\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873772 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-netd\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873780 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-netns\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873846 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-log-socket\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-systemd-units\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873892 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-kubelet\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873930 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873926 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873927 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.873974 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovn-node-metrics-cert\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874023 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874004 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-slash\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874155 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-ovn\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874172 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-log-socket" (OuterVolumeSpecName: "log-socket") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874187 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-ovn-kubernetes\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874241 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874302 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96p2w\" (UniqueName: \"kubernetes.io/projected/6e649075-d5ae-4d3a-b0af-b8f7f7784035-kube-api-access-96p2w\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-bin\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874430 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-env-overrides\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874499 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-script-lib\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-node-log\") pod \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\" (UID: \"6e649075-d5ae-4d3a-b0af-b8f7f7784035\") " Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874734 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.874792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875106 4780 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875186 4780 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875215 4780 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875246 4780 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875273 4780 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875299 4780 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875326 4780 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc 
kubenswrapper[4780]: I0219 08:32:58.875359 4780 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875386 4780 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875412 4780 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875438 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-node-log" (OuterVolumeSpecName: "node-log") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875576 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875639 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-slash" (OuterVolumeSpecName: "host-slash") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.875679 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.876261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.876516 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.881830 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.881979 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e649075-d5ae-4d3a-b0af-b8f7f7784035-kube-api-access-96p2w" (OuterVolumeSpecName: "kube-api-access-96p2w") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "kube-api-access-96p2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.884691 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xb5cv"] Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885040 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-acl-logging" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885076 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-acl-logging" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885095 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885113 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 
08:32:58.885167 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885186 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885208 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-node" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885225 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-node" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885248 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885261 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885279 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885292 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885309 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885325 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 08:32:58 
crc kubenswrapper[4780]: E0219 08:32:58.885348 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885365 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885400 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="northd" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885418 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="northd" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885441 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="sbdb" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885455 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="sbdb" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885471 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="nbdb" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885488 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="nbdb" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885512 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kubecfg-setup" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885530 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kubecfg-setup" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.885558 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7e11610c-3763-4c6b-954a-bce83e2aec5a" containerName="collect-profiles" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885576 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e11610c-3763-4c6b-954a-bce83e2aec5a" containerName="collect-profiles" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885788 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885824 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885845 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="northd" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885878 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="kube-rbac-proxy-node" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885897 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e11610c-3763-4c6b-954a-bce83e2aec5a" containerName="collect-profiles" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885919 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885939 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="nbdb" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.885965 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 
08:32:58.885988 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.886006 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.886027 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="sbdb" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.886052 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovn-acl-logging" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.886076 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.886339 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.886360 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerName="ovnkube-controller" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.889054 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/2.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.889755 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/1.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.889859 4780 generic.go:334] "Generic (PLEG): container finished" podID="c3eeec30-c76f-4ae2-9384-ebd13ac5eed5" 
containerID="58daa9b743d50a852f8a8d1ead5c1400cc941d3471ab2603c155aed626ec9aac" exitCode=2 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.890555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerDied","Data":"58daa9b743d50a852f8a8d1ead5c1400cc941d3471ab2603c155aed626ec9aac"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.890623 4780 scope.go:117] "RemoveContainer" containerID="f0206282a6a8f120aef6e1b59d4207bf470fbff2d7635c5ff892191c5d6c91f3" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.890838 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.891302 4780 scope.go:117] "RemoveContainer" containerID="58daa9b743d50a852f8a8d1ead5c1400cc941d3471ab2603c155aed626ec9aac" Feb 19 08:32:58 crc kubenswrapper[4780]: E0219 08:32:58.891611 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jgjfm_openshift-multus(c3eeec30-c76f-4ae2-9384-ebd13ac5eed5)\"" pod="openshift-multus/multus-jgjfm" podUID="c3eeec30-c76f-4ae2-9384-ebd13ac5eed5" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.898564 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6e649075-d5ae-4d3a-b0af-b8f7f7784035" (UID: "6e649075-d5ae-4d3a-b0af-b8f7f7784035"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.905687 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovnkube-controller/3.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.911664 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovn-acl-logging/0.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.912860 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-skpt9_6e649075-d5ae-4d3a-b0af-b8f7f7784035/ovn-controller/0.log" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.913896 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.913958 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" exitCode=0 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.913988 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" exitCode=0 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914000 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" exitCode=0 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914013 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" exitCode=0 Feb 19 08:32:58 crc 
kubenswrapper[4780]: I0219 08:32:58.914022 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" exitCode=0 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914031 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" exitCode=0 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914040 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" exitCode=143 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914049 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" exitCode=143 Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914176 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914512 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} Feb 19 08:32:58 crc kubenswrapper[4780]: 
I0219 08:32:58.914528 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.914542 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916248 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916365 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916385 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916421 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916435 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916444 4780 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916453 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916466 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916479 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916488 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916497 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916513 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916529 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} Feb 19 
08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916540 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916551 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916559 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916569 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916578 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916587 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916596 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916606 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} Feb 19 
08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916615 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916668 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916679 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916688 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916698 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916707 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916716 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916724 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916733 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916742 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916750 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skpt9" event={"ID":"6e649075-d5ae-4d3a-b0af-b8f7f7784035","Type":"ContainerDied","Data":"ead299c366ff1bcc4055f6197b6346e779c44b41dae9db9797f540255b993804"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916777 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916790 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916800 4780 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916809 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916819 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916828 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916836 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916845 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916854 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.916862 4780 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.949871 4780 
scope.go:117] "RemoveContainer" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovn-node-metrics-cert\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976494 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-slash\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976528 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-var-lib-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-ovn\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976578 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-env-overrides\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-run-netns\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976632 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-cni-netd\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976692 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976714 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzs5r\" 
(UniqueName: \"kubernetes.io/projected/f0758091-4c30-486b-9e86-cebadc0ee9d6-kube-api-access-pzs5r\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976736 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-cni-bin\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976758 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovnkube-script-lib\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976777 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-etc-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976796 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovnkube-config\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976819 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-systemd-units\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-systemd\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976867 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-log-socket\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976890 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976909 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-kubelet\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.976962 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-node-log\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977014 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977027 4780 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977038 4780 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977049 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96p2w\" (UniqueName: \"kubernetes.io/projected/6e649075-d5ae-4d3a-b0af-b8f7f7784035-kube-api-access-96p2w\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977062 4780 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977074 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e649075-d5ae-4d3a-b0af-b8f7f7784035-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977085 4780 
reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977096 4780 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.977107 4780 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e649075-d5ae-4d3a-b0af-b8f7f7784035-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.981234 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-skpt9"] Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.982990 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.984800 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-skpt9"] Feb 19 08:32:58 crc kubenswrapper[4780]: I0219 08:32:58.998690 4780 scope.go:117] "RemoveContainer" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.016959 4780 scope.go:117] "RemoveContainer" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.036191 4780 scope.go:117] "RemoveContainer" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.052994 4780 scope.go:117] "RemoveContainer" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 
08:32:59.066614 4780 scope.go:117] "RemoveContainer" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077621 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-run-netns\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077651 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-cni-netd\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077670 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077686 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzs5r\" (UniqueName: \"kubernetes.io/projected/f0758091-4c30-486b-9e86-cebadc0ee9d6-kube-api-access-pzs5r\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077755 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-run-netns\") pod \"ovnkube-node-xb5cv\" (UID: 
\"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077781 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-cni-netd\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.077945 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-cni-bin\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078030 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-cni-bin\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovnkube-script-lib\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 
08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-etc-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078110 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovnkube-config\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-systemd-units\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-systemd\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078194 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-log-socket\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078222 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078243 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-kubelet\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-node-log\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078307 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovn-node-metrics-cert\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078334 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078369 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-slash\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078391 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-var-lib-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078408 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-ovn\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078425 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-env-overrides\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078753 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078839 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-kubelet\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078862 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-systemd\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078800 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078894 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-log-socket\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078913 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-var-lib-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078931 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-run-ovn\") pod 
\"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078902 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-etc-openvswitch\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078931 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-node-log\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078885 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-systemd-units\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.078968 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f0758091-4c30-486b-9e86-cebadc0ee9d6-host-slash\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.079323 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovnkube-config\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 
crc kubenswrapper[4780]: I0219 08:32:59.079653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-env-overrides\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.080412 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovnkube-script-lib\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.082338 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f0758091-4c30-486b-9e86-cebadc0ee9d6-ovn-node-metrics-cert\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.085553 4780 scope.go:117] "RemoveContainer" containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.093157 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzs5r\" (UniqueName: \"kubernetes.io/projected/f0758091-4c30-486b-9e86-cebadc0ee9d6-kube-api-access-pzs5r\") pod \"ovnkube-node-xb5cv\" (UID: \"f0758091-4c30-486b-9e86-cebadc0ee9d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.099045 4780 scope.go:117] "RemoveContainer" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.115182 4780 scope.go:117] "RemoveContainer" 
containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.135968 4780 scope.go:117] "RemoveContainer" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.136609 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": container with ID starting with ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635 not found: ID does not exist" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.136689 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} err="failed to get container status \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": rpc error: code = NotFound desc = could not find container \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": container with ID starting with ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.136724 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.137469 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": container with ID starting with 6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc not found: ID does not exist" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:32:59 crc 
kubenswrapper[4780]: I0219 08:32:59.137520 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} err="failed to get container status \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": rpc error: code = NotFound desc = could not find container \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": container with ID starting with 6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.137553 4780 scope.go:117] "RemoveContainer" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.137877 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": container with ID starting with 9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3 not found: ID does not exist" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.137909 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} err="failed to get container status \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": rpc error: code = NotFound desc = could not find container \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": container with ID starting with 9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.137941 4780 scope.go:117] "RemoveContainer" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" Feb 19 
08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.138305 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": container with ID starting with 0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7 not found: ID does not exist" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.138453 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} err="failed to get container status \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": rpc error: code = NotFound desc = could not find container \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": container with ID starting with 0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.138553 4780 scope.go:117] "RemoveContainer" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.139246 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": container with ID starting with 71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac not found: ID does not exist" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.139377 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} err="failed to get container status 
\"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": rpc error: code = NotFound desc = could not find container \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": container with ID starting with 71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.139517 4780 scope.go:117] "RemoveContainer" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.140022 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": container with ID starting with d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947 not found: ID does not exist" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.140057 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} err="failed to get container status \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": rpc error: code = NotFound desc = could not find container \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": container with ID starting with d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.140107 4780 scope.go:117] "RemoveContainer" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.140477 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": container with ID starting with 5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81 not found: ID does not exist" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.140518 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} err="failed to get container status \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": rpc error: code = NotFound desc = could not find container \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": container with ID starting with 5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.140540 4780 scope.go:117] "RemoveContainer" containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.141172 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": container with ID starting with b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6 not found: ID does not exist" containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.141205 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} err="failed to get container status \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": rpc error: code = NotFound desc = could not find container \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": container with ID 
starting with b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.141225 4780 scope.go:117] "RemoveContainer" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.141548 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": container with ID starting with d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746 not found: ID does not exist" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.141592 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} err="failed to get container status \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": rpc error: code = NotFound desc = could not find container \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": container with ID starting with d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.141622 4780 scope.go:117] "RemoveContainer" containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" Feb 19 08:32:59 crc kubenswrapper[4780]: E0219 08:32:59.142001 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": container with ID starting with dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6 not found: ID does not exist" containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" Feb 19 
08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.142112 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} err="failed to get container status \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": rpc error: code = NotFound desc = could not find container \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": container with ID starting with dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.142234 4780 scope.go:117] "RemoveContainer" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.142864 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} err="failed to get container status \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": rpc error: code = NotFound desc = could not find container \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": container with ID starting with ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.142902 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.143280 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} err="failed to get container status \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": rpc error: code = NotFound desc = could not find container 
\"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": container with ID starting with 6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.143582 4780 scope.go:117] "RemoveContainer" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.144058 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} err="failed to get container status \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": rpc error: code = NotFound desc = could not find container \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": container with ID starting with 9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.144090 4780 scope.go:117] "RemoveContainer" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.144425 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} err="failed to get container status \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": rpc error: code = NotFound desc = could not find container \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": container with ID starting with 0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.144454 4780 scope.go:117] "RemoveContainer" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.144770 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} err="failed to get container status \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": rpc error: code = NotFound desc = could not find container \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": container with ID starting with 71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.144841 4780 scope.go:117] "RemoveContainer" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.145288 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} err="failed to get container status \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": rpc error: code = NotFound desc = could not find container \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": container with ID starting with d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.145319 4780 scope.go:117] "RemoveContainer" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.145699 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} err="failed to get container status \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": rpc error: code = NotFound desc = could not find container \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": container with ID starting with 
5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.145724 4780 scope.go:117] "RemoveContainer" containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.146683 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} err="failed to get container status \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": rpc error: code = NotFound desc = could not find container \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": container with ID starting with b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.146808 4780 scope.go:117] "RemoveContainer" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.147344 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} err="failed to get container status \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": rpc error: code = NotFound desc = could not find container \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": container with ID starting with d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.147452 4780 scope.go:117] "RemoveContainer" containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.147928 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} err="failed to get container status \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": rpc error: code = NotFound desc = could not find container \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": container with ID starting with dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.147956 4780 scope.go:117] "RemoveContainer" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.148364 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} err="failed to get container status \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": rpc error: code = NotFound desc = could not find container \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": container with ID starting with ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.148501 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.148888 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} err="failed to get container status \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": rpc error: code = NotFound desc = could not find container \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": container with ID starting with 6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc not found: ID does not 
exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.148913 4780 scope.go:117] "RemoveContainer" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.149401 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} err="failed to get container status \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": rpc error: code = NotFound desc = could not find container \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": container with ID starting with 9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.149585 4780 scope.go:117] "RemoveContainer" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.150000 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} err="failed to get container status \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": rpc error: code = NotFound desc = could not find container \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": container with ID starting with 0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.150184 4780 scope.go:117] "RemoveContainer" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.150542 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} err="failed to get container status 
\"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": rpc error: code = NotFound desc = could not find container \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": container with ID starting with 71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.150704 4780 scope.go:117] "RemoveContainer" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.151053 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} err="failed to get container status \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": rpc error: code = NotFound desc = could not find container \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": container with ID starting with d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.151216 4780 scope.go:117] "RemoveContainer" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.151649 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} err="failed to get container status \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": rpc error: code = NotFound desc = could not find container \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": container with ID starting with 5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.151675 4780 scope.go:117] "RemoveContainer" 
containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.152069 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} err="failed to get container status \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": rpc error: code = NotFound desc = could not find container \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": container with ID starting with b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.152219 4780 scope.go:117] "RemoveContainer" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.152620 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} err="failed to get container status \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": rpc error: code = NotFound desc = could not find container \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": container with ID starting with d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.152756 4780 scope.go:117] "RemoveContainer" containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.153188 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} err="failed to get container status \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": rpc error: code = NotFound desc = could 
not find container \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": container with ID starting with dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.153222 4780 scope.go:117] "RemoveContainer" containerID="ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.153584 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635"} err="failed to get container status \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": rpc error: code = NotFound desc = could not find container \"ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635\": container with ID starting with ad85c128a69ca4c6225d308e9c092716c42dec79349f622c1ca4faa4e923e635 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.153642 4780 scope.go:117] "RemoveContainer" containerID="6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.154053 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc"} err="failed to get container status \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": rpc error: code = NotFound desc = could not find container \"6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc\": container with ID starting with 6a32a150283f8ade98cfdc4ecc076ce62229a90932a6423bac2d60254cd3bafc not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.154202 4780 scope.go:117] "RemoveContainer" containerID="9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 
08:32:59.154601 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3"} err="failed to get container status \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": rpc error: code = NotFound desc = could not find container \"9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3\": container with ID starting with 9aa2ae16256f82558cd3c2267ec44858536fc0263a83c493edec0497f32d77d3 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.154628 4780 scope.go:117] "RemoveContainer" containerID="0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.154889 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7"} err="failed to get container status \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": rpc error: code = NotFound desc = could not find container \"0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7\": container with ID starting with 0173154ea0906fe01bcc8d407da1c61b861158b6d5442a1e113bccbce568f0c7 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.154921 4780 scope.go:117] "RemoveContainer" containerID="71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.155191 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac"} err="failed to get container status \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": rpc error: code = NotFound desc = could not find container \"71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac\": container with ID starting with 
71b9b0bc480de7de72547a201fda763c60b0db8dd61498f8afaf642ff386b9ac not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.155214 4780 scope.go:117] "RemoveContainer" containerID="d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.155416 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947"} err="failed to get container status \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": rpc error: code = NotFound desc = could not find container \"d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947\": container with ID starting with d1bc8c02070cae9c84577c124fbe6897d46ec17faa0415226f042c135c06d947 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.155508 4780 scope.go:117] "RemoveContainer" containerID="5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.155818 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81"} err="failed to get container status \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": rpc error: code = NotFound desc = could not find container \"5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81\": container with ID starting with 5f5b359f4e1468f3abb0990513592b914b27087d3e01d8759cc4e1b5f1ad6b81 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.155923 4780 scope.go:117] "RemoveContainer" containerID="b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.156259 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6"} err="failed to get container status \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": rpc error: code = NotFound desc = could not find container \"b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6\": container with ID starting with b118d23b91e782a7a13c5cd1fb87e8c2718115ad9ebfb468fcb6d0cf5602b8a6 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.156282 4780 scope.go:117] "RemoveContainer" containerID="d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.156508 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746"} err="failed to get container status \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": rpc error: code = NotFound desc = could not find container \"d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746\": container with ID starting with d18bb6190888476dd006d695f26db0b192708571ade939a8609348d3708d7746 not found: ID does not exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.156600 4780 scope.go:117] "RemoveContainer" containerID="dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.156939 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6"} err="failed to get container status \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": rpc error: code = NotFound desc = could not find container \"dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6\": container with ID starting with dd6c089c8f08ec68f3a56409f98751cf549763c38d369f503e90819831fb79c6 not found: ID does not 
exist" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.221594 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.924320 4780 generic.go:334] "Generic (PLEG): container finished" podID="f0758091-4c30-486b-9e86-cebadc0ee9d6" containerID="346d77dbbd18f616942692b71356f51e733c33d4953b3e784c3e75c87fb3b2ae" exitCode=0 Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.924441 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerDied","Data":"346d77dbbd18f616942692b71356f51e733c33d4953b3e784c3e75c87fb3b2ae"} Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.924728 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"b5dda481ffcca62d6debb8910212f8b123de64d73313e6b30081be32c14de74e"} Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.927667 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/2.log" Feb 19 08:32:59 crc kubenswrapper[4780]: I0219 08:32:59.945416 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e649075-d5ae-4d3a-b0af-b8f7f7784035" path="/var/lib/kubelet/pods/6e649075-d5ae-4d3a-b0af-b8f7f7784035/volumes" Feb 19 08:33:00 crc kubenswrapper[4780]: I0219 08:33:00.942673 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"92ce2fcd0f443f61e996b0fef1d2a005dc5ac8ecf20b05c996df346a9b7bba6b"} Feb 19 08:33:00 crc kubenswrapper[4780]: I0219 08:33:00.943001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"e1294d30d3a330fdece35de3bb4e9bce806185c04d6850769fe65d8ff0530d91"} Feb 19 08:33:00 crc kubenswrapper[4780]: I0219 08:33:00.943012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"fc61d72e7a6c8eecfe37c2d720993ceaca055059ecb53cb09a456b12981f6ead"} Feb 19 08:33:00 crc kubenswrapper[4780]: I0219 08:33:00.943020 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"92f86a05bcc40c17a8a7aa8781e6ea2c9f696154b69865060d2f21ef2df6a1f3"} Feb 19 08:33:00 crc kubenswrapper[4780]: I0219 08:33:00.943028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"2d923bcb7aa59502bae1dcc993f4624fd6523321442007de7668c0247fc80981"} Feb 19 08:33:00 crc kubenswrapper[4780]: I0219 08:33:00.943037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"2392ec993f81851da780d81c0cb6be7ba6119d8295eba68022e6a6fe7fe4519b"} Feb 19 08:33:03 crc kubenswrapper[4780]: I0219 08:33:03.978046 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"512d8ebb5e2b17021ac5924523f71b105aa484814ac461e490446a17686cbc18"} Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.677804 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-xsd65"] Feb 19 08:33:04 crc kubenswrapper[4780]: 
I0219 08:33:04.678959 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.683089 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.683119 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.683501 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.683719 4780 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r5h5z" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.768229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-node-mnt\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.768319 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-crc-storage\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.768442 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dn2c\" (UniqueName: \"kubernetes.io/projected/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-kube-api-access-5dn2c\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" 
Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.870039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dn2c\" (UniqueName: \"kubernetes.io/projected/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-kube-api-access-5dn2c\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.870248 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-node-mnt\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.870332 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-crc-storage\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.870672 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-node-mnt\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.871519 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-crc-storage\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.904654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5dn2c\" (UniqueName: \"kubernetes.io/projected/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-kube-api-access-5dn2c\") pod \"crc-storage-crc-xsd65\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:04 crc kubenswrapper[4780]: I0219 08:33:04.996590 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:05 crc kubenswrapper[4780]: E0219 08:33:05.037492 4780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(909ca7f6ce6e2791f2edf69660e6addc49f927eeca89b2fdaaf3f46d5714103c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:33:05 crc kubenswrapper[4780]: E0219 08:33:05.038101 4780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(909ca7f6ce6e2791f2edf69660e6addc49f927eeca89b2fdaaf3f46d5714103c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:05 crc kubenswrapper[4780]: E0219 08:33:05.038232 4780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(909ca7f6ce6e2791f2edf69660e6addc49f927eeca89b2fdaaf3f46d5714103c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:05 crc kubenswrapper[4780]: E0219 08:33:05.038333 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xsd65_crc-storage(2e68e1ee-ab83-4f62-91da-1f5fd9c051e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xsd65_crc-storage(2e68e1ee-ab83-4f62-91da-1f5fd9c051e6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(909ca7f6ce6e2791f2edf69660e6addc49f927eeca89b2fdaaf3f46d5714103c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xsd65" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" Feb 19 08:33:05 crc kubenswrapper[4780]: I0219 08:33:05.994371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" event={"ID":"f0758091-4c30-486b-9e86-cebadc0ee9d6","Type":"ContainerStarted","Data":"b80f464d212856d6ade0b9c3d19d3c88d4c4f815e6d05c1a70f4ccb517163c03"} Feb 19 08:33:05 crc kubenswrapper[4780]: I0219 08:33:05.994762 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:33:05 crc kubenswrapper[4780]: I0219 08:33:05.994802 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.027113 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" podStartSLOduration=8.027088485 podStartE2EDuration="8.027088485s" podCreationTimestamp="2026-02-19 08:32:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:33:06.025931096 +0000 UTC m=+728.769588585" 
watchObservedRunningTime="2026-02-19 08:33:06.027088485 +0000 UTC m=+728.770745944" Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.034422 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.066540 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xsd65"] Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.067071 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.067555 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:06 crc kubenswrapper[4780]: E0219 08:33:06.106018 4780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(7d786ebd40aee4cd25fca0941d796b168b5ce02a627486a2314b7130d5a62aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:33:06 crc kubenswrapper[4780]: E0219 08:33:06.106106 4780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(7d786ebd40aee4cd25fca0941d796b168b5ce02a627486a2314b7130d5a62aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:06 crc kubenswrapper[4780]: E0219 08:33:06.106179 4780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(7d786ebd40aee4cd25fca0941d796b168b5ce02a627486a2314b7130d5a62aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:06 crc kubenswrapper[4780]: E0219 08:33:06.106247 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xsd65_crc-storage(2e68e1ee-ab83-4f62-91da-1f5fd9c051e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xsd65_crc-storage(2e68e1ee-ab83-4f62-91da-1f5fd9c051e6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(7d786ebd40aee4cd25fca0941d796b168b5ce02a627486a2314b7130d5a62aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xsd65" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.335983 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:33:06 crc kubenswrapper[4780]: I0219 08:33:06.336096 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:33:07 crc kubenswrapper[4780]: I0219 08:33:07.001055 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:33:07 crc kubenswrapper[4780]: I0219 08:33:07.047767 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:33:12 crc kubenswrapper[4780]: I0219 08:33:12.938802 4780 scope.go:117] "RemoveContainer" containerID="58daa9b743d50a852f8a8d1ead5c1400cc941d3471ab2603c155aed626ec9aac" Feb 19 08:33:12 crc kubenswrapper[4780]: E0219 08:33:12.941541 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jgjfm_openshift-multus(c3eeec30-c76f-4ae2-9384-ebd13ac5eed5)\"" pod="openshift-multus/multus-jgjfm" podUID="c3eeec30-c76f-4ae2-9384-ebd13ac5eed5" Feb 19 08:33:20 crc kubenswrapper[4780]: I0219 08:33:20.937723 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:20 crc kubenswrapper[4780]: I0219 08:33:20.938586 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:20 crc kubenswrapper[4780]: E0219 08:33:20.986787 4780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(56c5324647492738469adca49a51fcdf7c8e99bfcad0d79fd8e307346501869e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:33:20 crc kubenswrapper[4780]: E0219 08:33:20.986883 4780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(56c5324647492738469adca49a51fcdf7c8e99bfcad0d79fd8e307346501869e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:20 crc kubenswrapper[4780]: E0219 08:33:20.986919 4780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(56c5324647492738469adca49a51fcdf7c8e99bfcad0d79fd8e307346501869e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:20 crc kubenswrapper[4780]: E0219 08:33:20.986982 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xsd65_crc-storage(2e68e1ee-ab83-4f62-91da-1f5fd9c051e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xsd65_crc-storage(2e68e1ee-ab83-4f62-91da-1f5fd9c051e6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xsd65_crc-storage_2e68e1ee-ab83-4f62-91da-1f5fd9c051e6_0(56c5324647492738469adca49a51fcdf7c8e99bfcad0d79fd8e307346501869e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xsd65" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" Feb 19 08:33:24 crc kubenswrapper[4780]: I0219 08:33:24.938698 4780 scope.go:117] "RemoveContainer" containerID="58daa9b743d50a852f8a8d1ead5c1400cc941d3471ab2603c155aed626ec9aac" Feb 19 08:33:26 crc kubenswrapper[4780]: I0219 08:33:26.137901 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgjfm_c3eeec30-c76f-4ae2-9384-ebd13ac5eed5/kube-multus/2.log" Feb 19 08:33:26 crc kubenswrapper[4780]: I0219 08:33:26.138412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgjfm" event={"ID":"c3eeec30-c76f-4ae2-9384-ebd13ac5eed5","Type":"ContainerStarted","Data":"87b3e0bdee92eba71a78bed8f34abf8a05a98ce961049115e87b933f03ec1b3b"} Feb 19 08:33:29 crc kubenswrapper[4780]: I0219 08:33:29.250427 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xb5cv" Feb 19 08:33:35 crc kubenswrapper[4780]: I0219 08:33:35.937613 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:35 crc kubenswrapper[4780]: I0219 08:33:35.938517 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.155408 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xsd65"] Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.171629 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.197352 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xsd65" event={"ID":"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6","Type":"ContainerStarted","Data":"868249718bc4f2146c0ca32ae64197ec8fa088a85c3a5695b2ce11d3366576a7"} Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.335705 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.335755 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.335801 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.336316 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84ed25dcb2239f2ba5f5ca3fb35c26e2541ae4e71e20ce55bb0fede65e97942c"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:33:36 crc kubenswrapper[4780]: I0219 08:33:36.336376 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://84ed25dcb2239f2ba5f5ca3fb35c26e2541ae4e71e20ce55bb0fede65e97942c" gracePeriod=600 Feb 19 08:33:37 crc kubenswrapper[4780]: I0219 08:33:37.207969 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="84ed25dcb2239f2ba5f5ca3fb35c26e2541ae4e71e20ce55bb0fede65e97942c" exitCode=0 Feb 19 08:33:37 crc kubenswrapper[4780]: I0219 08:33:37.208002 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"84ed25dcb2239f2ba5f5ca3fb35c26e2541ae4e71e20ce55bb0fede65e97942c"} Feb 19 08:33:37 crc kubenswrapper[4780]: I0219 08:33:37.208413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"d86287631278548b0c80b1e05e352ed6295219281318df1c4880abec1eb44525"} Feb 19 08:33:37 crc kubenswrapper[4780]: I0219 08:33:37.208443 4780 scope.go:117] "RemoveContainer" containerID="0141248a6e9e107b13bbd82e62fd102183654747a9d088dafde81ee055021a18" Feb 19 08:33:38 crc kubenswrapper[4780]: I0219 08:33:38.217709 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" containerID="a1bf9a294a41019bbb79d5fb1278795d943b9f54258bb31a18bfe3714e600b5d" exitCode=0 Feb 19 08:33:38 crc kubenswrapper[4780]: I0219 08:33:38.217824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="crc-storage/crc-storage-crc-xsd65" event={"ID":"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6","Type":"ContainerDied","Data":"a1bf9a294a41019bbb79d5fb1278795d943b9f54258bb31a18bfe3714e600b5d"} Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.476460 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.599805 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-node-mnt\") pod \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.599905 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dn2c\" (UniqueName: \"kubernetes.io/projected/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-kube-api-access-5dn2c\") pod \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.599956 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-crc-storage\") pod \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\" (UID: \"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6\") " Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.601093 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" (UID: "2e68e1ee-ab83-4f62-91da-1f5fd9c051e6"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.605950 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-kube-api-access-5dn2c" (OuterVolumeSpecName: "kube-api-access-5dn2c") pod "2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" (UID: "2e68e1ee-ab83-4f62-91da-1f5fd9c051e6"). InnerVolumeSpecName "kube-api-access-5dn2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.626887 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" (UID: "2e68e1ee-ab83-4f62-91da-1f5fd9c051e6"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.701758 4780 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.701815 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dn2c\" (UniqueName: \"kubernetes.io/projected/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-kube-api-access-5dn2c\") on node \"crc\" DevicePath \"\"" Feb 19 08:33:39 crc kubenswrapper[4780]: I0219 08:33:39.701827 4780 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 08:33:40 crc kubenswrapper[4780]: I0219 08:33:40.235428 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xsd65" 
event={"ID":"2e68e1ee-ab83-4f62-91da-1f5fd9c051e6","Type":"ContainerDied","Data":"868249718bc4f2146c0ca32ae64197ec8fa088a85c3a5695b2ce11d3366576a7"} Feb 19 08:33:40 crc kubenswrapper[4780]: I0219 08:33:40.235475 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868249718bc4f2146c0ca32ae64197ec8fa088a85c3a5695b2ce11d3366576a7" Feb 19 08:33:40 crc kubenswrapper[4780]: I0219 08:33:40.235517 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xsd65" Feb 19 08:33:43 crc kubenswrapper[4780]: I0219 08:33:43.984868 4780 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 08:33:46 crc kubenswrapper[4780]: I0219 08:33:46.964906 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq"] Feb 19 08:33:46 crc kubenswrapper[4780]: E0219 08:33:46.965435 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" containerName="storage" Feb 19 08:33:46 crc kubenswrapper[4780]: I0219 08:33:46.965453 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" containerName="storage" Feb 19 08:33:46 crc kubenswrapper[4780]: I0219 08:33:46.965591 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" containerName="storage" Feb 19 08:33:46 crc kubenswrapper[4780]: I0219 08:33:46.966485 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:46 crc kubenswrapper[4780]: I0219 08:33:46.967826 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 08:33:46 crc kubenswrapper[4780]: I0219 08:33:46.980454 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq"] Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.097898 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.097937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.097975 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6t5\" (UniqueName: \"kubernetes.io/projected/8eabe830-a7df-46f3-840e-5585eae95c5a-kube-api-access-qq6t5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: 
I0219 08:33:47.198781 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.198827 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.198850 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6t5\" (UniqueName: \"kubernetes.io/projected/8eabe830-a7df-46f3-840e-5585eae95c5a-kube-api-access-qq6t5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.199520 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.199664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.241006 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6t5\" (UniqueName: \"kubernetes.io/projected/8eabe830-a7df-46f3-840e-5585eae95c5a-kube-api-access-qq6t5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.281598 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:47 crc kubenswrapper[4780]: I0219 08:33:47.477642 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq"] Feb 19 08:33:48 crc kubenswrapper[4780]: I0219 08:33:48.298718 4780 generic.go:334] "Generic (PLEG): container finished" podID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerID="2bb9087129c1344597a6f42c6ec24b47b33deeec060d9fa71fb18fcf8c7b243b" exitCode=0 Feb 19 08:33:48 crc kubenswrapper[4780]: I0219 08:33:48.298764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" event={"ID":"8eabe830-a7df-46f3-840e-5585eae95c5a","Type":"ContainerDied","Data":"2bb9087129c1344597a6f42c6ec24b47b33deeec060d9fa71fb18fcf8c7b243b"} Feb 19 08:33:48 crc kubenswrapper[4780]: I0219 08:33:48.298794 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" event={"ID":"8eabe830-a7df-46f3-840e-5585eae95c5a","Type":"ContainerStarted","Data":"a13275048cb1777820aa3812dcf809a15e048b50f5cd313c0efb7d2277006c6f"} Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.259817 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xb5q5"] Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.260993 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.284284 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xb5q5"] Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.427553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-utilities\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.427746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9tw\" (UniqueName: \"kubernetes.io/projected/e6f008c0-58b7-4313-96e7-16344d7621d5-kube-api-access-wk9tw\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.427886 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-catalog-content\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " 
pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.529339 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-catalog-content\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.529686 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-utilities\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.529742 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9tw\" (UniqueName: \"kubernetes.io/projected/e6f008c0-58b7-4313-96e7-16344d7621d5-kube-api-access-wk9tw\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.530360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-catalog-content\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.530624 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-utilities\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc 
kubenswrapper[4780]: I0219 08:33:49.553110 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9tw\" (UniqueName: \"kubernetes.io/projected/e6f008c0-58b7-4313-96e7-16344d7621d5-kube-api-access-wk9tw\") pod \"redhat-operators-xb5q5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.584225 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:49 crc kubenswrapper[4780]: I0219 08:33:49.811382 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xb5q5"] Feb 19 08:33:49 crc kubenswrapper[4780]: W0219 08:33:49.818321 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f008c0_58b7_4313_96e7_16344d7621d5.slice/crio-95fb4613956ed48f768307a70812e0aa15accece1d7d8f827fe9650cf582c3af WatchSource:0}: Error finding container 95fb4613956ed48f768307a70812e0aa15accece1d7d8f827fe9650cf582c3af: Status 404 returned error can't find the container with id 95fb4613956ed48f768307a70812e0aa15accece1d7d8f827fe9650cf582c3af Feb 19 08:33:50 crc kubenswrapper[4780]: I0219 08:33:50.309100 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerID="267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e" exitCode=0 Feb 19 08:33:50 crc kubenswrapper[4780]: I0219 08:33:50.309164 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb5q5" event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerDied","Data":"267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e"} Feb 19 08:33:50 crc kubenswrapper[4780]: I0219 08:33:50.309460 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xb5q5" event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerStarted","Data":"95fb4613956ed48f768307a70812e0aa15accece1d7d8f827fe9650cf582c3af"} Feb 19 08:33:50 crc kubenswrapper[4780]: I0219 08:33:50.311241 4780 generic.go:334] "Generic (PLEG): container finished" podID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerID="7728bbee02601dc1e1de7c9d41442f2867ffb1f9283e9274bbe149f95c840384" exitCode=0 Feb 19 08:33:50 crc kubenswrapper[4780]: I0219 08:33:50.311275 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" event={"ID":"8eabe830-a7df-46f3-840e-5585eae95c5a","Type":"ContainerDied","Data":"7728bbee02601dc1e1de7c9d41442f2867ffb1f9283e9274bbe149f95c840384"} Feb 19 08:33:51 crc kubenswrapper[4780]: I0219 08:33:51.318596 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb5q5" event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerStarted","Data":"3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1"} Feb 19 08:33:51 crc kubenswrapper[4780]: I0219 08:33:51.321082 4780 generic.go:334] "Generic (PLEG): container finished" podID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerID="7968742a7499fcd79501f8d8cdf2118fca4d3340bbaaaf350bdb5df84d523fca" exitCode=0 Feb 19 08:33:51 crc kubenswrapper[4780]: I0219 08:33:51.321177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" event={"ID":"8eabe830-a7df-46f3-840e-5585eae95c5a","Type":"ContainerDied","Data":"7968742a7499fcd79501f8d8cdf2118fca4d3340bbaaaf350bdb5df84d523fca"} Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.331692 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerID="3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1" 
exitCode=0 Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.331828 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb5q5" event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerDied","Data":"3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1"} Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.683572 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.867525 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-bundle\") pod \"8eabe830-a7df-46f3-840e-5585eae95c5a\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.867834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-util\") pod \"8eabe830-a7df-46f3-840e-5585eae95c5a\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.867874 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq6t5\" (UniqueName: \"kubernetes.io/projected/8eabe830-a7df-46f3-840e-5585eae95c5a-kube-api-access-qq6t5\") pod \"8eabe830-a7df-46f3-840e-5585eae95c5a\" (UID: \"8eabe830-a7df-46f3-840e-5585eae95c5a\") " Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.868421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-bundle" (OuterVolumeSpecName: "bundle") pod "8eabe830-a7df-46f3-840e-5585eae95c5a" (UID: "8eabe830-a7df-46f3-840e-5585eae95c5a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.873954 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eabe830-a7df-46f3-840e-5585eae95c5a-kube-api-access-qq6t5" (OuterVolumeSpecName: "kube-api-access-qq6t5") pod "8eabe830-a7df-46f3-840e-5585eae95c5a" (UID: "8eabe830-a7df-46f3-840e-5585eae95c5a"). InnerVolumeSpecName "kube-api-access-qq6t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.887967 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-util" (OuterVolumeSpecName: "util") pod "8eabe830-a7df-46f3-840e-5585eae95c5a" (UID: "8eabe830-a7df-46f3-840e-5585eae95c5a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.969903 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.969937 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8eabe830-a7df-46f3-840e-5585eae95c5a-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:33:52 crc kubenswrapper[4780]: I0219 08:33:52.969954 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq6t5\" (UniqueName: \"kubernetes.io/projected/8eabe830-a7df-46f3-840e-5585eae95c5a-kube-api-access-qq6t5\") on node \"crc\" DevicePath \"\"" Feb 19 08:33:53 crc kubenswrapper[4780]: I0219 08:33:53.340159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb5q5" 
event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerStarted","Data":"340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8"} Feb 19 08:33:53 crc kubenswrapper[4780]: I0219 08:33:53.342708 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" event={"ID":"8eabe830-a7df-46f3-840e-5585eae95c5a","Type":"ContainerDied","Data":"a13275048cb1777820aa3812dcf809a15e048b50f5cd313c0efb7d2277006c6f"} Feb 19 08:33:53 crc kubenswrapper[4780]: I0219 08:33:53.342786 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13275048cb1777820aa3812dcf809a15e048b50f5cd313c0efb7d2277006c6f" Feb 19 08:33:53 crc kubenswrapper[4780]: I0219 08:33:53.342734 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq" Feb 19 08:33:53 crc kubenswrapper[4780]: I0219 08:33:53.358745 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xb5q5" podStartSLOduration=1.9074001489999999 podStartE2EDuration="4.358725464s" podCreationTimestamp="2026-02-19 08:33:49 +0000 UTC" firstStartedPulling="2026-02-19 08:33:50.310684908 +0000 UTC m=+773.054342357" lastFinishedPulling="2026-02-19 08:33:52.762010223 +0000 UTC m=+775.505667672" observedRunningTime="2026-02-19 08:33:53.353792252 +0000 UTC m=+776.097449711" watchObservedRunningTime="2026-02-19 08:33:53.358725464 +0000 UTC m=+776.102382923" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.382178 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nm99k"] Feb 19 08:33:57 crc kubenswrapper[4780]: E0219 08:33:57.382753 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="pull" Feb 19 08:33:57 crc 
kubenswrapper[4780]: I0219 08:33:57.382768 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="pull" Feb 19 08:33:57 crc kubenswrapper[4780]: E0219 08:33:57.382793 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="extract" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.382801 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="extract" Feb 19 08:33:57 crc kubenswrapper[4780]: E0219 08:33:57.382812 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="util" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.382820 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="util" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.382962 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eabe830-a7df-46f3-840e-5585eae95c5a" containerName="extract" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.383447 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.385182 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.385298 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tgkcl" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.385617 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.392043 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nm99k"] Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.529217 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgtkn\" (UniqueName: \"kubernetes.io/projected/43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54-kube-api-access-bgtkn\") pod \"nmstate-operator-694c9596b7-nm99k\" (UID: \"43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.630352 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgtkn\" (UniqueName: \"kubernetes.io/projected/43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54-kube-api-access-bgtkn\") pod \"nmstate-operator-694c9596b7-nm99k\" (UID: \"43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.654586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgtkn\" (UniqueName: \"kubernetes.io/projected/43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54-kube-api-access-bgtkn\") pod \"nmstate-operator-694c9596b7-nm99k\" (UID: 
\"43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" Feb 19 08:33:57 crc kubenswrapper[4780]: I0219 08:33:57.709117 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" Feb 19 08:33:58 crc kubenswrapper[4780]: I0219 08:33:58.131519 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nm99k"] Feb 19 08:33:58 crc kubenswrapper[4780]: W0219 08:33:58.140058 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fa6e0c_4b7c_4bcd_b9f3_f9ac6e146e54.slice/crio-014df6dd2b72d7b757ccf7e5ce96333894d259161af92221f776efa0766d3ae9 WatchSource:0}: Error finding container 014df6dd2b72d7b757ccf7e5ce96333894d259161af92221f776efa0766d3ae9: Status 404 returned error can't find the container with id 014df6dd2b72d7b757ccf7e5ce96333894d259161af92221f776efa0766d3ae9 Feb 19 08:33:58 crc kubenswrapper[4780]: I0219 08:33:58.368626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" event={"ID":"43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54","Type":"ContainerStarted","Data":"014df6dd2b72d7b757ccf7e5ce96333894d259161af92221f776efa0766d3ae9"} Feb 19 08:33:59 crc kubenswrapper[4780]: I0219 08:33:59.584807 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:59 crc kubenswrapper[4780]: I0219 08:33:59.584853 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:33:59 crc kubenswrapper[4780]: I0219 08:33:59.624989 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:34:00 crc kubenswrapper[4780]: I0219 08:34:00.380832 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" event={"ID":"43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54","Type":"ContainerStarted","Data":"0cc37274c0591f06e3af2c08fcde354989f2abc4a60d443c01b1fb293cecf554"} Feb 19 08:34:00 crc kubenswrapper[4780]: I0219 08:34:00.402609 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-nm99k" podStartSLOduration=1.470889471 podStartE2EDuration="3.402589548s" podCreationTimestamp="2026-02-19 08:33:57 +0000 UTC" firstStartedPulling="2026-02-19 08:33:58.143299521 +0000 UTC m=+780.886956970" lastFinishedPulling="2026-02-19 08:34:00.074999598 +0000 UTC m=+782.818657047" observedRunningTime="2026-02-19 08:34:00.399920093 +0000 UTC m=+783.143577542" watchObservedRunningTime="2026-02-19 08:34:00.402589548 +0000 UTC m=+783.146246997" Feb 19 08:34:00 crc kubenswrapper[4780]: I0219 08:34:00.421068 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:34:02 crc kubenswrapper[4780]: I0219 08:34:02.245249 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xb5q5"] Feb 19 08:34:02 crc kubenswrapper[4780]: I0219 08:34:02.391746 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xb5q5" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="registry-server" containerID="cri-o://340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8" gracePeriod=2 Feb 19 08:34:03 crc kubenswrapper[4780]: I0219 08:34:03.936342 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.109255 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9tw\" (UniqueName: \"kubernetes.io/projected/e6f008c0-58b7-4313-96e7-16344d7621d5-kube-api-access-wk9tw\") pod \"e6f008c0-58b7-4313-96e7-16344d7621d5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.109412 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-catalog-content\") pod \"e6f008c0-58b7-4313-96e7-16344d7621d5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.109544 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-utilities\") pod \"e6f008c0-58b7-4313-96e7-16344d7621d5\" (UID: \"e6f008c0-58b7-4313-96e7-16344d7621d5\") " Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.110563 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-utilities" (OuterVolumeSpecName: "utilities") pod "e6f008c0-58b7-4313-96e7-16344d7621d5" (UID: "e6f008c0-58b7-4313-96e7-16344d7621d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.121183 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f008c0-58b7-4313-96e7-16344d7621d5-kube-api-access-wk9tw" (OuterVolumeSpecName: "kube-api-access-wk9tw") pod "e6f008c0-58b7-4313-96e7-16344d7621d5" (UID: "e6f008c0-58b7-4313-96e7-16344d7621d5"). InnerVolumeSpecName "kube-api-access-wk9tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.211807 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk9tw\" (UniqueName: \"kubernetes.io/projected/e6f008c0-58b7-4313-96e7-16344d7621d5-kube-api-access-wk9tw\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.211840 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.298168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6f008c0-58b7-4313-96e7-16344d7621d5" (UID: "e6f008c0-58b7-4313-96e7-16344d7621d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.312815 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f008c0-58b7-4313-96e7-16344d7621d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.405403 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerID="340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8" exitCode=0 Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.405462 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xb5q5" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.405465 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb5q5" event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerDied","Data":"340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8"} Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.405621 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb5q5" event={"ID":"e6f008c0-58b7-4313-96e7-16344d7621d5","Type":"ContainerDied","Data":"95fb4613956ed48f768307a70812e0aa15accece1d7d8f827fe9650cf582c3af"} Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.405655 4780 scope.go:117] "RemoveContainer" containerID="340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.432749 4780 scope.go:117] "RemoveContainer" containerID="3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.465068 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xb5q5"] Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.480402 4780 scope.go:117] "RemoveContainer" containerID="267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.483963 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xb5q5"] Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.506856 4780 scope.go:117] "RemoveContainer" containerID="340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8" Feb 19 08:34:04 crc kubenswrapper[4780]: E0219 08:34:04.507599 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8\": container with ID starting with 340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8 not found: ID does not exist" containerID="340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.507666 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8"} err="failed to get container status \"340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8\": rpc error: code = NotFound desc = could not find container \"340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8\": container with ID starting with 340c8c0e784bc510bd3595be3a03f81be6c29a00d6ff3ab241c09e1459c3d9a8 not found: ID does not exist" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.507706 4780 scope.go:117] "RemoveContainer" containerID="3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1" Feb 19 08:34:04 crc kubenswrapper[4780]: E0219 08:34:04.508079 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1\": container with ID starting with 3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1 not found: ID does not exist" containerID="3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.508114 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1"} err="failed to get container status \"3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1\": rpc error: code = NotFound desc = could not find container \"3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1\": container with ID 
starting with 3bf6f9c9bb3d30a4e48c67cb44d7569e82baa3afd6eaf0317e34fc3a938c92d1 not found: ID does not exist" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.508150 4780 scope.go:117] "RemoveContainer" containerID="267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e" Feb 19 08:34:04 crc kubenswrapper[4780]: E0219 08:34:04.508454 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e\": container with ID starting with 267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e not found: ID does not exist" containerID="267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e" Feb 19 08:34:04 crc kubenswrapper[4780]: I0219 08:34:04.508480 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e"} err="failed to get container status \"267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e\": rpc error: code = NotFound desc = could not find container \"267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e\": container with ID starting with 267d55be8e53075c275451276bf151cf30a749e7f78ddce533f7a6fe6b1c826e not found: ID does not exist" Feb 19 08:34:05 crc kubenswrapper[4780]: I0219 08:34:05.949050 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" path="/var/lib/kubelet/pods/e6f008c0-58b7-4313-96e7-16344d7621d5/volumes" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.176258 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-btq8f"] Feb 19 08:34:07 crc kubenswrapper[4780]: E0219 08:34:07.176813 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="extract-utilities" Feb 19 08:34:07 crc 
kubenswrapper[4780]: I0219 08:34:07.176832 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="extract-utilities" Feb 19 08:34:07 crc kubenswrapper[4780]: E0219 08:34:07.176843 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="registry-server" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.176850 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="registry-server" Feb 19 08:34:07 crc kubenswrapper[4780]: E0219 08:34:07.176862 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="extract-content" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.176872 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="extract-content" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.176994 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f008c0-58b7-4313-96e7-16344d7621d5" containerName="registry-server" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.177652 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.180421 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-h6h48" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.194900 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.195649 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.201863 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.205861 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-btq8f"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.231944 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-c8gqg"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.232892 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.248819 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.330561 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.331283 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.333417 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.335025 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-c5wst" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.335409 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.351709 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-dbus-socket\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.351775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwzn\" (UniqueName: \"kubernetes.io/projected/c38ec25b-ac0c-4f99-a3c9-ca226d8aa544-kube-api-access-6bwzn\") pod \"nmstate-webhook-866bcb46dc-7xww2\" (UID: \"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.351802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvk8\" (UniqueName: \"kubernetes.io/projected/1cba9428-b26b-44ee-84c5-cac06ce86f4d-kube-api-access-jxvk8\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.351974 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-cj69n\" (UniqueName: \"kubernetes.io/projected/f48e99ea-198f-48d8-b2ef-83602d80118b-kube-api-access-cj69n\") pod \"nmstate-metrics-58c85c668d-btq8f\" (UID: \"f48e99ea-198f-48d8-b2ef-83602d80118b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.352098 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-nmstate-lock\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.352176 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c38ec25b-ac0c-4f99-a3c9-ca226d8aa544-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7xww2\" (UID: \"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.352202 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-ovs-socket\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.352968 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453145 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-dbus-socket\") pod \"nmstate-handler-c8gqg\" (UID: 
\"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453205 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453246 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bwzn\" (UniqueName: \"kubernetes.io/projected/c38ec25b-ac0c-4f99-a3c9-ca226d8aa544-kube-api-access-6bwzn\") pod \"nmstate-webhook-866bcb46dc-7xww2\" (UID: \"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453268 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvk8\" (UniqueName: \"kubernetes.io/projected/1cba9428-b26b-44ee-84c5-cac06ce86f4d-kube-api-access-jxvk8\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453296 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjmv\" (UniqueName: \"kubernetes.io/projected/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-kube-api-access-9jjmv\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453323 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453350 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj69n\" (UniqueName: \"kubernetes.io/projected/f48e99ea-198f-48d8-b2ef-83602d80118b-kube-api-access-cj69n\") pod \"nmstate-metrics-58c85c668d-btq8f\" (UID: \"f48e99ea-198f-48d8-b2ef-83602d80118b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453495 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-nmstate-lock\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c38ec25b-ac0c-4f99-a3c9-ca226d8aa544-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7xww2\" (UID: \"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453567 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-ovs-socket\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453590 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-nmstate-lock\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453645 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-ovs-socket\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.453769 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1cba9428-b26b-44ee-84c5-cac06ce86f4d-dbus-socket\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.460454 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c38ec25b-ac0c-4f99-a3c9-ca226d8aa544-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7xww2\" (UID: \"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.470099 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bwzn\" (UniqueName: \"kubernetes.io/projected/c38ec25b-ac0c-4f99-a3c9-ca226d8aa544-kube-api-access-6bwzn\") pod \"nmstate-webhook-866bcb46dc-7xww2\" (UID: \"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.478822 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvk8\" (UniqueName: 
\"kubernetes.io/projected/1cba9428-b26b-44ee-84c5-cac06ce86f4d-kube-api-access-jxvk8\") pod \"nmstate-handler-c8gqg\" (UID: \"1cba9428-b26b-44ee-84c5-cac06ce86f4d\") " pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.492280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj69n\" (UniqueName: \"kubernetes.io/projected/f48e99ea-198f-48d8-b2ef-83602d80118b-kube-api-access-cj69n\") pod \"nmstate-metrics-58c85c668d-btq8f\" (UID: \"f48e99ea-198f-48d8-b2ef-83602d80118b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.494893 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.511442 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.551905 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.554662 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.554771 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.554794 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fbdc896fc-8w4dp"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.554810 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jjmv\" (UniqueName: \"kubernetes.io/projected/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-kube-api-access-9jjmv\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.555474 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.555754 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.559495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.572262 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fbdc896fc-8w4dp"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.579094 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jjmv\" (UniqueName: \"kubernetes.io/projected/d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3-kube-api-access-9jjmv\") pod \"nmstate-console-plugin-5c78fc5d65-6z999\" (UID: \"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.646699 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rss26\" (UniqueName: \"kubernetes.io/projected/8db3df9d-ba0e-4566-aae1-695189ef1282-kube-api-access-rss26\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667468 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8db3df9d-ba0e-4566-aae1-695189ef1282-console-serving-cert\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667508 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-service-ca\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-oauth-serving-cert\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-trusted-ca-bundle\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667644 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-console-config\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.667724 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8db3df9d-ba0e-4566-aae1-695189ef1282-console-oauth-config\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768449 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-trusted-ca-bundle\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768496 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-console-config\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/8db3df9d-ba0e-4566-aae1-695189ef1282-console-oauth-config\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768587 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rss26\" (UniqueName: \"kubernetes.io/projected/8db3df9d-ba0e-4566-aae1-695189ef1282-kube-api-access-rss26\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8db3df9d-ba0e-4566-aae1-695189ef1282-console-serving-cert\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768640 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-service-ca\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.768667 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-oauth-serving-cert\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.769815 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-oauth-serving-cert\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.770225 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-trusted-ca-bundle\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.770369 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-console-config\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.771526 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8db3df9d-ba0e-4566-aae1-695189ef1282-service-ca\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.773715 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8db3df9d-ba0e-4566-aae1-695189ef1282-console-oauth-config\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.776759 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8db3df9d-ba0e-4566-aae1-695189ef1282-console-serving-cert\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.787116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rss26\" (UniqueName: \"kubernetes.io/projected/8db3df9d-ba0e-4566-aae1-695189ef1282-kube-api-access-rss26\") pod \"console-6fbdc896fc-8w4dp\" (UID: \"8db3df9d-ba0e-4566-aae1-695189ef1282\") " pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.896933 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.961616 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-btq8f"] Feb 19 08:34:07 crc kubenswrapper[4780]: I0219 08:34:07.994491 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2"] Feb 19 08:34:07 crc kubenswrapper[4780]: W0219 08:34:07.997526 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38ec25b_ac0c_4f99_a3c9_ca226d8aa544.slice/crio-3248071078b7b5f0b86c4eb56e212dcdf47ae8fe3ead82eb27dcec3eea391896 WatchSource:0}: Error finding container 3248071078b7b5f0b86c4eb56e212dcdf47ae8fe3ead82eb27dcec3eea391896: Status 404 returned error can't find the container with id 3248071078b7b5f0b86c4eb56e212dcdf47ae8fe3ead82eb27dcec3eea391896 Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.056695 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999"] Feb 19 08:34:08 crc kubenswrapper[4780]: W0219 08:34:08.058507 4780 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9259cc4_9cb2_4f82_8c90_bf9ee6871fe3.slice/crio-2b425f3b5dbeb9d872f68d8b083536a662205fcb2e72abcc0ec43c6f185f3a9d WatchSource:0}: Error finding container 2b425f3b5dbeb9d872f68d8b083536a662205fcb2e72abcc0ec43c6f185f3a9d: Status 404 returned error can't find the container with id 2b425f3b5dbeb9d872f68d8b083536a662205fcb2e72abcc0ec43c6f185f3a9d Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.093600 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fbdc896fc-8w4dp"] Feb 19 08:34:08 crc kubenswrapper[4780]: W0219 08:34:08.099524 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8db3df9d_ba0e_4566_aae1_695189ef1282.slice/crio-4e5084d1d45805090c893b5e1dcb0b025b087e1cd30071dca291e6e972fa0134 WatchSource:0}: Error finding container 4e5084d1d45805090c893b5e1dcb0b025b087e1cd30071dca291e6e972fa0134: Status 404 returned error can't find the container with id 4e5084d1d45805090c893b5e1dcb0b025b087e1cd30071dca291e6e972fa0134 Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.429956 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbdc896fc-8w4dp" event={"ID":"8db3df9d-ba0e-4566-aae1-695189ef1282","Type":"ContainerStarted","Data":"c68488990f654838ed155ec75238c22468834afb52ffd9367e1226cd7248ce75"} Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.430033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbdc896fc-8w4dp" event={"ID":"8db3df9d-ba0e-4566-aae1-695189ef1282","Type":"ContainerStarted","Data":"4e5084d1d45805090c893b5e1dcb0b025b087e1cd30071dca291e6e972fa0134"} Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.432577 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" 
event={"ID":"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3","Type":"ContainerStarted","Data":"2b425f3b5dbeb9d872f68d8b083536a662205fcb2e72abcc0ec43c6f185f3a9d"} Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.433725 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" event={"ID":"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544","Type":"ContainerStarted","Data":"3248071078b7b5f0b86c4eb56e212dcdf47ae8fe3ead82eb27dcec3eea391896"} Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.434885 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c8gqg" event={"ID":"1cba9428-b26b-44ee-84c5-cac06ce86f4d","Type":"ContainerStarted","Data":"72e1c92ae3e30d6fceba81281ed31ea264a9e52d509b8a21db9e2ba106149a4c"} Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.435578 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" event={"ID":"f48e99ea-198f-48d8-b2ef-83602d80118b","Type":"ContainerStarted","Data":"2f66a55f91ab33887cd982e2b8d5f0411b04493af34e5a2f31f1e96a3474a4f9"} Feb 19 08:34:08 crc kubenswrapper[4780]: I0219 08:34:08.452453 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fbdc896fc-8w4dp" podStartSLOduration=1.45239228 podStartE2EDuration="1.45239228s" podCreationTimestamp="2026-02-19 08:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:34:08.449647713 +0000 UTC m=+791.193305172" watchObservedRunningTime="2026-02-19 08:34:08.45239228 +0000 UTC m=+791.196049749" Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.446520 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c8gqg" 
event={"ID":"1cba9428-b26b-44ee-84c5-cac06ce86f4d","Type":"ContainerStarted","Data":"37ae377a914045a8045b19b900eb392be46858f33613eacfb01fa5a0345c4ff6"} Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.448273 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.449191 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" event={"ID":"f48e99ea-198f-48d8-b2ef-83602d80118b","Type":"ContainerStarted","Data":"f769aeb9e48bc9f1487dd7e1de5beb9e4689f0e9d6d74a867b1aa5a428f810f0"} Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.451243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" event={"ID":"d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3","Type":"ContainerStarted","Data":"564d56f72ba67524efc7501c31d8f7ca7b413f32ff787eb031e7ea82178b3ff7"} Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.452458 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" event={"ID":"c38ec25b-ac0c-4f99-a3c9-ca226d8aa544","Type":"ContainerStarted","Data":"ce76573bbad47f9c681925cc8317bae6f55535e59f5c6f4df058440bebb59785"} Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.453277 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.467003 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-c8gqg" podStartSLOduration=1.025069539 podStartE2EDuration="3.466980942s" podCreationTimestamp="2026-02-19 08:34:07 +0000 UTC" firstStartedPulling="2026-02-19 08:34:07.600658828 +0000 UTC m=+790.344316277" lastFinishedPulling="2026-02-19 08:34:10.042570211 +0000 UTC m=+792.786227680" observedRunningTime="2026-02-19 
08:34:10.464435689 +0000 UTC m=+793.208093138" watchObservedRunningTime="2026-02-19 08:34:10.466980942 +0000 UTC m=+793.210638391" Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.482981 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-6z999" podStartSLOduration=1.499089802 podStartE2EDuration="3.482947385s" podCreationTimestamp="2026-02-19 08:34:07 +0000 UTC" firstStartedPulling="2026-02-19 08:34:08.063556128 +0000 UTC m=+790.807213577" lastFinishedPulling="2026-02-19 08:34:10.047413711 +0000 UTC m=+792.791071160" observedRunningTime="2026-02-19 08:34:10.480996907 +0000 UTC m=+793.224654356" watchObservedRunningTime="2026-02-19 08:34:10.482947385 +0000 UTC m=+793.226604884" Feb 19 08:34:10 crc kubenswrapper[4780]: I0219 08:34:10.514951 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" podStartSLOduration=1.43578289 podStartE2EDuration="3.514924324s" podCreationTimestamp="2026-02-19 08:34:07 +0000 UTC" firstStartedPulling="2026-02-19 08:34:07.999450446 +0000 UTC m=+790.743107895" lastFinishedPulling="2026-02-19 08:34:10.07859187 +0000 UTC m=+792.822249329" observedRunningTime="2026-02-19 08:34:10.51030049 +0000 UTC m=+793.253958019" watchObservedRunningTime="2026-02-19 08:34:10.514924324 +0000 UTC m=+793.258581773" Feb 19 08:34:13 crc kubenswrapper[4780]: I0219 08:34:13.478461 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" event={"ID":"f48e99ea-198f-48d8-b2ef-83602d80118b","Type":"ContainerStarted","Data":"018b32a2f4f9b6fad07d6deb33d8419c30355fd465a0320dc191671f1a2b918d"} Feb 19 08:34:13 crc kubenswrapper[4780]: I0219 08:34:13.504872 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-btq8f" podStartSLOduration=1.591154634 podStartE2EDuration="6.504850907s" 
podCreationTimestamp="2026-02-19 08:34:07 +0000 UTC" firstStartedPulling="2026-02-19 08:34:07.959581132 +0000 UTC m=+790.703238611" lastFinishedPulling="2026-02-19 08:34:12.873277405 +0000 UTC m=+795.616934884" observedRunningTime="2026-02-19 08:34:13.501234447 +0000 UTC m=+796.244891926" watchObservedRunningTime="2026-02-19 08:34:13.504850907 +0000 UTC m=+796.248508366" Feb 19 08:34:17 crc kubenswrapper[4780]: I0219 08:34:17.592436 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-c8gqg" Feb 19 08:34:17 crc kubenswrapper[4780]: I0219 08:34:17.897465 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:17 crc kubenswrapper[4780]: I0219 08:34:17.897677 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:17 crc kubenswrapper[4780]: I0219 08:34:17.905948 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:18 crc kubenswrapper[4780]: I0219 08:34:18.514106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fbdc896fc-8w4dp" Feb 19 08:34:18 crc kubenswrapper[4780]: I0219 08:34:18.578042 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mzcjh"] Feb 19 08:34:27 crc kubenswrapper[4780]: I0219 08:34:27.520239 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7xww2" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.180900 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk"] Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.185256 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.187560 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.200318 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk"] Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.248743 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzr4\" (UniqueName: \"kubernetes.io/projected/c1790072-7e58-47b3-8895-51dade41bbf1-kube-api-access-hqzr4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.248785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.248855 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: 
I0219 08:34:41.349761 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.349819 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzr4\" (UniqueName: \"kubernetes.io/projected/c1790072-7e58-47b3-8895-51dade41bbf1-kube-api-access-hqzr4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.349845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.350227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.350276 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.368857 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzr4\" (UniqueName: \"kubernetes.io/projected/c1790072-7e58-47b3-8895-51dade41bbf1-kube-api-access-hqzr4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.501035 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:41 crc kubenswrapper[4780]: I0219 08:34:41.686567 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk"] Feb 19 08:34:42 crc kubenswrapper[4780]: I0219 08:34:42.657832 4780 generic.go:334] "Generic (PLEG): container finished" podID="c1790072-7e58-47b3-8895-51dade41bbf1" containerID="7613982f87781637d76b58ba62e851fa8380a6f97b1a2f2cdfd8a701fed6a9cd" exitCode=0 Feb 19 08:34:42 crc kubenswrapper[4780]: I0219 08:34:42.657953 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" event={"ID":"c1790072-7e58-47b3-8895-51dade41bbf1","Type":"ContainerDied","Data":"7613982f87781637d76b58ba62e851fa8380a6f97b1a2f2cdfd8a701fed6a9cd"} Feb 19 08:34:42 crc kubenswrapper[4780]: I0219 08:34:42.658262 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" event={"ID":"c1790072-7e58-47b3-8895-51dade41bbf1","Type":"ContainerStarted","Data":"62df2f55169b534b1526165d5e668399de0a0f49b4674d6aed3d54ce40472b9a"} Feb 19 08:34:43 crc kubenswrapper[4780]: I0219 08:34:43.619031 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mzcjh" podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerName="console" containerID="cri-o://e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92" gracePeriod=15 Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.075152 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mzcjh_28c08ad8-6d6d-4072-bf01-6aecd11b8bb9/console/0.log" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.075217 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186487 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-service-ca\") pod \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186534 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-config\") pod \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186556 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-serving-cert\") pod 
\"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186692 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-trusted-ca-bundle\") pod \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186715 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-oauth-serving-cert\") pod \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n66xg\" (UniqueName: \"kubernetes.io/projected/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-kube-api-access-n66xg\") pod \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.186790 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-oauth-config\") pod \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\" (UID: \"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9\") " Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.187556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.187629 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.187660 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-service-ca" (OuterVolumeSpecName: "service-ca") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.187714 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-config" (OuterVolumeSpecName: "console-config") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.193727 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-kube-api-access-n66xg" (OuterVolumeSpecName: "kube-api-access-n66xg") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "kube-api-access-n66xg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.194973 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.195467 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" (UID: "28c08ad8-6d6d-4072-bf01-6aecd11b8bb9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288750 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288792 4780 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288805 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n66xg\" (UniqueName: \"kubernetes.io/projected/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-kube-api-access-n66xg\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288819 4780 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288832 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288843 4780 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.288854 4780 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.674021 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mzcjh_28c08ad8-6d6d-4072-bf01-6aecd11b8bb9/console/0.log" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.674077 4780 generic.go:334] "Generic (PLEG): container finished" podID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerID="e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92" exitCode=2 Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.674236 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mzcjh" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.674237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mzcjh" event={"ID":"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9","Type":"ContainerDied","Data":"e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92"} Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.674634 4780 scope.go:117] "RemoveContainer" containerID="e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.675268 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mzcjh" event={"ID":"28c08ad8-6d6d-4072-bf01-6aecd11b8bb9","Type":"ContainerDied","Data":"fc013167f1612e51e7d1488a03a77972fda62f6f01b43c3deb4e923f8df5148b"} Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.676760 4780 generic.go:334] "Generic (PLEG): container finished" podID="c1790072-7e58-47b3-8895-51dade41bbf1" containerID="4801b7b44d6ebf8af32472ce5adfa18773dd20bc0f214d02b263654293203136" exitCode=0 Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.676939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" event={"ID":"c1790072-7e58-47b3-8895-51dade41bbf1","Type":"ContainerDied","Data":"4801b7b44d6ebf8af32472ce5adfa18773dd20bc0f214d02b263654293203136"} Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.725894 4780 scope.go:117] "RemoveContainer" containerID="e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92" Feb 19 08:34:44 crc kubenswrapper[4780]: E0219 08:34:44.726802 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92\": container with ID starting with 
e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92 not found: ID does not exist" containerID="e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.726844 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92"} err="failed to get container status \"e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92\": rpc error: code = NotFound desc = could not find container \"e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92\": container with ID starting with e317b9a403b19784a720e2765b5021feb32d2e99b049a79538ffd6e8228fdb92 not found: ID does not exist" Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.728593 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mzcjh"] Feb 19 08:34:44 crc kubenswrapper[4780]: I0219 08:34:44.732068 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mzcjh"] Feb 19 08:34:45 crc kubenswrapper[4780]: I0219 08:34:45.689083 4780 generic.go:334] "Generic (PLEG): container finished" podID="c1790072-7e58-47b3-8895-51dade41bbf1" containerID="0b1ba1a3f9371ecb730a00d68d18e1abcf6ab79ec5543d05b59baa771bcd0168" exitCode=0 Feb 19 08:34:45 crc kubenswrapper[4780]: I0219 08:34:45.689179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" event={"ID":"c1790072-7e58-47b3-8895-51dade41bbf1","Type":"ContainerDied","Data":"0b1ba1a3f9371ecb730a00d68d18e1abcf6ab79ec5543d05b59baa771bcd0168"} Feb 19 08:34:45 crc kubenswrapper[4780]: I0219 08:34:45.950511 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" path="/var/lib/kubelet/pods/28c08ad8-6d6d-4072-bf01-6aecd11b8bb9/volumes" Feb 19 08:34:46 crc 
kubenswrapper[4780]: I0219 08:34:46.972072 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.126955 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-bundle\") pod \"c1790072-7e58-47b3-8895-51dade41bbf1\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.127104 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util\") pod \"c1790072-7e58-47b3-8895-51dade41bbf1\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.127206 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzr4\" (UniqueName: \"kubernetes.io/projected/c1790072-7e58-47b3-8895-51dade41bbf1-kube-api-access-hqzr4\") pod \"c1790072-7e58-47b3-8895-51dade41bbf1\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.129116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-bundle" (OuterVolumeSpecName: "bundle") pod "c1790072-7e58-47b3-8895-51dade41bbf1" (UID: "c1790072-7e58-47b3-8895-51dade41bbf1"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.146917 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1790072-7e58-47b3-8895-51dade41bbf1-kube-api-access-hqzr4" (OuterVolumeSpecName: "kube-api-access-hqzr4") pod "c1790072-7e58-47b3-8895-51dade41bbf1" (UID: "c1790072-7e58-47b3-8895-51dade41bbf1"). InnerVolumeSpecName "kube-api-access-hqzr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.229473 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqzr4\" (UniqueName: \"kubernetes.io/projected/c1790072-7e58-47b3-8895-51dade41bbf1-kube-api-access-hqzr4\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.229535 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.329888 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util" (OuterVolumeSpecName: "util") pod "c1790072-7e58-47b3-8895-51dade41bbf1" (UID: "c1790072-7e58-47b3-8895-51dade41bbf1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.330271 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util\") pod \"c1790072-7e58-47b3-8895-51dade41bbf1\" (UID: \"c1790072-7e58-47b3-8895-51dade41bbf1\") " Feb 19 08:34:47 crc kubenswrapper[4780]: W0219 08:34:47.330456 4780 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c1790072-7e58-47b3-8895-51dade41bbf1/volumes/kubernetes.io~empty-dir/util Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.330493 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util" (OuterVolumeSpecName: "util") pod "c1790072-7e58-47b3-8895-51dade41bbf1" (UID: "c1790072-7e58-47b3-8895-51dade41bbf1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.330660 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1790072-7e58-47b3-8895-51dade41bbf1-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.706523 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" event={"ID":"c1790072-7e58-47b3-8895-51dade41bbf1","Type":"ContainerDied","Data":"62df2f55169b534b1526165d5e668399de0a0f49b4674d6aed3d54ce40472b9a"} Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.706562 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk" Feb 19 08:34:47 crc kubenswrapper[4780]: I0219 08:34:47.706575 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62df2f55169b534b1526165d5e668399de0a0f49b4674d6aed3d54ce40472b9a" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607019 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-595746788d-rtthh"] Feb 19 08:34:56 crc kubenswrapper[4780]: E0219 08:34:56.607585 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="pull" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607596 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="pull" Feb 19 08:34:56 crc kubenswrapper[4780]: E0219 08:34:56.607610 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="extract" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607616 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="extract" Feb 19 08:34:56 crc kubenswrapper[4780]: E0219 08:34:56.607628 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="util" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607634 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="util" Feb 19 08:34:56 crc kubenswrapper[4780]: E0219 08:34:56.607647 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerName="console" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607652 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerName="console" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607737 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1790072-7e58-47b3-8895-51dade41bbf1" containerName="extract" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.607746 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c08ad8-6d6d-4072-bf01-6aecd11b8bb9" containerName="console" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.608078 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.610903 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-snzjv" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.611051 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.611051 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.611343 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.611923 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.636635 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-595746788d-rtthh"] Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.660958 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/cf9a966b-352f-4e6c-9b56-e44c711b07d7-apiservice-cert\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.661011 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqf64\" (UniqueName: \"kubernetes.io/projected/cf9a966b-352f-4e6c-9b56-e44c711b07d7-kube-api-access-pqf64\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.661043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf9a966b-352f-4e6c-9b56-e44c711b07d7-webhook-cert\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.761807 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqf64\" (UniqueName: \"kubernetes.io/projected/cf9a966b-352f-4e6c-9b56-e44c711b07d7-kube-api-access-pqf64\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.762064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf9a966b-352f-4e6c-9b56-e44c711b07d7-webhook-cert\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: 
\"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.762205 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf9a966b-352f-4e6c-9b56-e44c711b07d7-apiservice-cert\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.767172 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf9a966b-352f-4e6c-9b56-e44c711b07d7-webhook-cert\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.774770 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf9a966b-352f-4e6c-9b56-e44c711b07d7-apiservice-cert\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.788556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqf64\" (UniqueName: \"kubernetes.io/projected/cf9a966b-352f-4e6c-9b56-e44c711b07d7-kube-api-access-pqf64\") pod \"metallb-operator-controller-manager-595746788d-rtthh\" (UID: \"cf9a966b-352f-4e6c-9b56-e44c711b07d7\") " pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.925835 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.963094 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk"] Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.965811 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.978996 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.979363 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.980789 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-t9mnh" Feb 19 08:34:56 crc kubenswrapper[4780]: I0219 08:34:56.984182 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk"] Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.159255 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-595746788d-rtthh"] Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.166230 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwf5c\" (UniqueName: \"kubernetes.io/projected/7f234297-fdfa-4fb9-89e2-0d970160c0a4-kube-api-access-fwf5c\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.166620 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f234297-fdfa-4fb9-89e2-0d970160c0a4-webhook-cert\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.166643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f234297-fdfa-4fb9-89e2-0d970160c0a4-apiservice-cert\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.267827 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwf5c\" (UniqueName: \"kubernetes.io/projected/7f234297-fdfa-4fb9-89e2-0d970160c0a4-kube-api-access-fwf5c\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.267883 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f234297-fdfa-4fb9-89e2-0d970160c0a4-webhook-cert\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.267907 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f234297-fdfa-4fb9-89e2-0d970160c0a4-apiservice-cert\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: 
\"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.275706 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f234297-fdfa-4fb9-89e2-0d970160c0a4-webhook-cert\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.276651 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f234297-fdfa-4fb9-89e2-0d970160c0a4-apiservice-cert\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.290064 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwf5c\" (UniqueName: \"kubernetes.io/projected/7f234297-fdfa-4fb9-89e2-0d970160c0a4-kube-api-access-fwf5c\") pod \"metallb-operator-webhook-server-5b5d8f86db-qcszk\" (UID: \"7f234297-fdfa-4fb9-89e2-0d970160c0a4\") " pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.300037 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.507067 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk"] Feb 19 08:34:57 crc kubenswrapper[4780]: W0219 08:34:57.516542 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f234297_fdfa_4fb9_89e2_0d970160c0a4.slice/crio-eb41c79bd09ff50b300c4f63ef944c49879ecf16aee0b53cde48f8b68b3b67de WatchSource:0}: Error finding container eb41c79bd09ff50b300c4f63ef944c49879ecf16aee0b53cde48f8b68b3b67de: Status 404 returned error can't find the container with id eb41c79bd09ff50b300c4f63ef944c49879ecf16aee0b53cde48f8b68b3b67de Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.776214 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" event={"ID":"cf9a966b-352f-4e6c-9b56-e44c711b07d7","Type":"ContainerStarted","Data":"3c633b2fd0b7622f7196cb721ee17a2834511ba8e2c7178f077b9d518b2def6b"} Feb 19 08:34:57 crc kubenswrapper[4780]: I0219 08:34:57.777617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" event={"ID":"7f234297-fdfa-4fb9-89e2-0d970160c0a4","Type":"ContainerStarted","Data":"eb41c79bd09ff50b300c4f63ef944c49879ecf16aee0b53cde48f8b68b3b67de"} Feb 19 08:35:00 crc kubenswrapper[4780]: I0219 08:35:00.811613 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" event={"ID":"cf9a966b-352f-4e6c-9b56-e44c711b07d7","Type":"ContainerStarted","Data":"60aa2b05f1a0b5716d33d04602f80d99112101de5bb57189f3b1ec4d1f1fba27"} Feb 19 08:35:00 crc kubenswrapper[4780]: I0219 08:35:00.812261 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:35:00 crc kubenswrapper[4780]: I0219 08:35:00.836973 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" podStartSLOduration=2.297012325 podStartE2EDuration="4.836959016s" podCreationTimestamp="2026-02-19 08:34:56 +0000 UTC" firstStartedPulling="2026-02-19 08:34:57.178627304 +0000 UTC m=+839.922284753" lastFinishedPulling="2026-02-19 08:34:59.718573995 +0000 UTC m=+842.462231444" observedRunningTime="2026-02-19 08:35:00.835849129 +0000 UTC m=+843.579506578" watchObservedRunningTime="2026-02-19 08:35:00.836959016 +0000 UTC m=+843.580616465" Feb 19 08:35:01 crc kubenswrapper[4780]: I0219 08:35:01.819535 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" event={"ID":"7f234297-fdfa-4fb9-89e2-0d970160c0a4","Type":"ContainerStarted","Data":"a43ed2aca4c342b45fc9b04c4d8c01b07f5208d0e661001244d0bd2f9e7ab50a"} Feb 19 08:35:01 crc kubenswrapper[4780]: I0219 08:35:01.821282 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:35:01 crc kubenswrapper[4780]: I0219 08:35:01.841433 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" podStartSLOduration=2.020658947 podStartE2EDuration="5.841416736s" podCreationTimestamp="2026-02-19 08:34:56 +0000 UTC" firstStartedPulling="2026-02-19 08:34:57.519712118 +0000 UTC m=+840.263369567" lastFinishedPulling="2026-02-19 08:35:01.340469917 +0000 UTC m=+844.084127356" observedRunningTime="2026-02-19 08:35:01.839090539 +0000 UTC m=+844.582747978" watchObservedRunningTime="2026-02-19 08:35:01.841416736 +0000 UTC m=+844.585074185" Feb 19 08:35:17 crc kubenswrapper[4780]: I0219 08:35:17.304946 4780 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b5d8f86db-qcszk" Feb 19 08:35:36 crc kubenswrapper[4780]: I0219 08:35:36.336025 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:35:36 crc kubenswrapper[4780]: I0219 08:35:36.336564 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:35:36 crc kubenswrapper[4780]: I0219 08:35:36.929394 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-595746788d-rtthh" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.605289 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-txb2m"] Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.608308 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.611720 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp"] Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.612590 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.624306 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.624530 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.624667 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tkmc9" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.624864 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.637152 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp"] Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687792 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics-certs\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-reloader\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-conf\") pod \"frr-k8s-txb2m\" (UID: 
\"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687889 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5qp\" (UniqueName: \"kubernetes.io/projected/d67fd9e4-d9ed-457d-9b03-d226672c5e12-kube-api-access-bv5qp\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-sockets\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-startup\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687960 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2xs\" (UniqueName: \"kubernetes.io/projected/4cda5e67-a145-4db5-b4f7-5a0dca33ccd3-kube-api-access-qs2xs\") pod \"frr-k8s-webhook-server-78b44bf5bb-59pnp\" (UID: \"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.687987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cda5e67-a145-4db5-b4f7-5a0dca33ccd3-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-59pnp\" (UID: 
\"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.688007 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.709284 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-l2dzr"] Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.710414 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.712369 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.712548 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hfllg" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.712992 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.712994 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.725062 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-xq67r"] Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.726149 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.730351 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.740898 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-xq67r"] Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/16814ff5-74e6-4366-b12c-683bd1a455d0-metallb-excludel2\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cda5e67-a145-4db5-b4f7-5a0dca33ccd3-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-59pnp\" (UID: \"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789560 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3686843-4ebe-479a-9ed7-08aa57f7aa39-cert\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789607 
4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-metrics-certs\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789625 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics-certs\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789650 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-reloader\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789679 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rsfr\" (UniqueName: \"kubernetes.io/projected/16814ff5-74e6-4366-b12c-683bd1a455d0-kube-api-access-2rsfr\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-conf\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789722 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5qp\" (UniqueName: 
\"kubernetes.io/projected/d67fd9e4-d9ed-457d-9b03-d226672c5e12-kube-api-access-bv5qp\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789751 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-sockets\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789774 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vr2s\" (UniqueName: \"kubernetes.io/projected/e3686843-4ebe-479a-9ed7-08aa57f7aa39-kube-api-access-5vr2s\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789795 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3686843-4ebe-479a-9ed7-08aa57f7aa39-metrics-certs\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789816 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-startup\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789848 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2xs\" (UniqueName: 
\"kubernetes.io/projected/4cda5e67-a145-4db5-b4f7-5a0dca33ccd3-kube-api-access-qs2xs\") pod \"frr-k8s-webhook-server-78b44bf5bb-59pnp\" (UID: \"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789872 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.789994 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: E0219 08:35:37.790030 4780 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 19 08:35:37 crc kubenswrapper[4780]: E0219 08:35:37.790090 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics-certs podName:d67fd9e4-d9ed-457d-9b03-d226672c5e12 nodeName:}" failed. No retries permitted until 2026-02-19 08:35:38.290070932 +0000 UTC m=+881.033728481 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics-certs") pod "frr-k8s-txb2m" (UID: "d67fd9e4-d9ed-457d-9b03-d226672c5e12") : secret "frr-k8s-certs-secret" not found Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.790237 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-sockets\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.792386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-reloader\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.792533 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-conf\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.793176 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d67fd9e4-d9ed-457d-9b03-d226672c5e12-frr-startup\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.794897 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cda5e67-a145-4db5-b4f7-5a0dca33ccd3-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-59pnp\" (UID: \"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.808390 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2xs\" (UniqueName: \"kubernetes.io/projected/4cda5e67-a145-4db5-b4f7-5a0dca33ccd3-kube-api-access-qs2xs\") pod \"frr-k8s-webhook-server-78b44bf5bb-59pnp\" (UID: \"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.813351 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5qp\" (UniqueName: \"kubernetes.io/projected/d67fd9e4-d9ed-457d-9b03-d226672c5e12-kube-api-access-bv5qp\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.890549 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/16814ff5-74e6-4366-b12c-683bd1a455d0-metallb-excludel2\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.890591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3686843-4ebe-479a-9ed7-08aa57f7aa39-cert\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.890628 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-metrics-certs\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 
08:35:37.891219 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/16814ff5-74e6-4366-b12c-683bd1a455d0-metallb-excludel2\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.891266 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rsfr\" (UniqueName: \"kubernetes.io/projected/16814ff5-74e6-4366-b12c-683bd1a455d0-kube-api-access-2rsfr\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.891315 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vr2s\" (UniqueName: \"kubernetes.io/projected/e3686843-4ebe-479a-9ed7-08aa57f7aa39-kube-api-access-5vr2s\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.891336 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3686843-4ebe-479a-9ed7-08aa57f7aa39-metrics-certs\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.891375 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: E0219 08:35:37.891454 4780 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Feb 19 08:35:37 crc kubenswrapper[4780]: E0219 08:35:37.891530 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist podName:16814ff5-74e6-4366-b12c-683bd1a455d0 nodeName:}" failed. No retries permitted until 2026-02-19 08:35:38.391518905 +0000 UTC m=+881.135176354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist") pod "speaker-l2dzr" (UID: "16814ff5-74e6-4366-b12c-683bd1a455d0") : secret "metallb-memberlist" not found Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.892011 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.893988 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-metrics-certs\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.894077 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3686843-4ebe-479a-9ed7-08aa57f7aa39-metrics-certs\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.904656 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3686843-4ebe-479a-9ed7-08aa57f7aa39-cert\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.907262 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vr2s\" (UniqueName: \"kubernetes.io/projected/e3686843-4ebe-479a-9ed7-08aa57f7aa39-kube-api-access-5vr2s\") pod \"controller-69bbfbf88f-xq67r\" (UID: \"e3686843-4ebe-479a-9ed7-08aa57f7aa39\") " pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.910996 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rsfr\" (UniqueName: \"kubernetes.io/projected/16814ff5-74e6-4366-b12c-683bd1a455d0-kube-api-access-2rsfr\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:37 crc kubenswrapper[4780]: I0219 08:35:37.948205 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.041741 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.172553 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp"] Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.248968 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-xq67r"] Feb 19 08:35:38 crc kubenswrapper[4780]: W0219 08:35:38.253729 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3686843_4ebe_479a_9ed7_08aa57f7aa39.slice/crio-edaa64cdbb9cc63e2689b8b23985e7b1188c458207818a4e637537068a774319 WatchSource:0}: Error finding container edaa64cdbb9cc63e2689b8b23985e7b1188c458207818a4e637537068a774319: Status 404 returned error can't find the container with id edaa64cdbb9cc63e2689b8b23985e7b1188c458207818a4e637537068a774319 Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.296633 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics-certs\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.301973 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d67fd9e4-d9ed-457d-9b03-d226672c5e12-metrics-certs\") pod \"frr-k8s-txb2m\" (UID: \"d67fd9e4-d9ed-457d-9b03-d226672c5e12\") " pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.398093 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " 
pod="metallb-system/speaker-l2dzr" Feb 19 08:35:38 crc kubenswrapper[4780]: E0219 08:35:38.398273 4780 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 08:35:38 crc kubenswrapper[4780]: E0219 08:35:38.398530 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist podName:16814ff5-74e6-4366-b12c-683bd1a455d0 nodeName:}" failed. No retries permitted until 2026-02-19 08:35:39.398512993 +0000 UTC m=+882.142170442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist") pod "speaker-l2dzr" (UID: "16814ff5-74e6-4366-b12c-683bd1a455d0") : secret "metallb-memberlist" not found Feb 19 08:35:38 crc kubenswrapper[4780]: I0219 08:35:38.532218 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.079997 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-xq67r" event={"ID":"e3686843-4ebe-479a-9ed7-08aa57f7aa39","Type":"ContainerStarted","Data":"e5cede794fe9de19d3cb64018026aa94fdae37ceb2d50e082a1d8d8f2513b755"} Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.080047 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-xq67r" event={"ID":"e3686843-4ebe-479a-9ed7-08aa57f7aa39","Type":"ContainerStarted","Data":"483067557edd8ff744bc9f7c1bf9aa6a6ca623621e43bd238185e198366d1cda"} Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.080060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-xq67r" event={"ID":"e3686843-4ebe-479a-9ed7-08aa57f7aa39","Type":"ContainerStarted","Data":"edaa64cdbb9cc63e2689b8b23985e7b1188c458207818a4e637537068a774319"} Feb 19 08:35:39 crc 
kubenswrapper[4780]: I0219 08:35:39.080096 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.105638 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-xq67r" podStartSLOduration=2.105615537 podStartE2EDuration="2.105615537s" podCreationTimestamp="2026-02-19 08:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:35:39.102014298 +0000 UTC m=+881.845671747" watchObservedRunningTime="2026-02-19 08:35:39.105615537 +0000 UTC m=+881.849272986" Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.115926 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" event={"ID":"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3","Type":"ContainerStarted","Data":"fdd46946538d70ea1e7f3f093a8ddecc4e87525e63257775c5551fa8c3746058"} Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.116837 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"4160f35b1a8b8f752b530bd6a2c36f075aa333a729895f603a5aa77de2eaec65"} Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.412239 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist\") pod \"speaker-l2dzr\" (UID: \"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.418870 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/16814ff5-74e6-4366-b12c-683bd1a455d0-memberlist\") pod \"speaker-l2dzr\" (UID: 
\"16814ff5-74e6-4366-b12c-683bd1a455d0\") " pod="metallb-system/speaker-l2dzr" Feb 19 08:35:39 crc kubenswrapper[4780]: I0219 08:35:39.527370 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-l2dzr" Feb 19 08:35:39 crc kubenswrapper[4780]: W0219 08:35:39.571809 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16814ff5_74e6_4366_b12c_683bd1a455d0.slice/crio-9e03c3f5a5240469826ef3df8e7abf811a7634e0eb0ad6410fa5505cfd358b01 WatchSource:0}: Error finding container 9e03c3f5a5240469826ef3df8e7abf811a7634e0eb0ad6410fa5505cfd358b01: Status 404 returned error can't find the container with id 9e03c3f5a5240469826ef3df8e7abf811a7634e0eb0ad6410fa5505cfd358b01 Feb 19 08:35:40 crc kubenswrapper[4780]: I0219 08:35:40.125644 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2dzr" event={"ID":"16814ff5-74e6-4366-b12c-683bd1a455d0","Type":"ContainerStarted","Data":"2790406625a457723e42d27b687ef959f2f72bcb506ec965c6464d5deb5d5e7f"} Feb 19 08:35:40 crc kubenswrapper[4780]: I0219 08:35:40.126062 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2dzr" event={"ID":"16814ff5-74e6-4366-b12c-683bd1a455d0","Type":"ContainerStarted","Data":"9e03c3f5a5240469826ef3df8e7abf811a7634e0eb0ad6410fa5505cfd358b01"} Feb 19 08:35:41 crc kubenswrapper[4780]: I0219 08:35:41.138143 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2dzr" event={"ID":"16814ff5-74e6-4366-b12c-683bd1a455d0","Type":"ContainerStarted","Data":"5a64159834329fb89103b967531ce61b761f22e8ec2da6a329d68c21ae6cfc62"} Feb 19 08:35:41 crc kubenswrapper[4780]: I0219 08:35:41.139138 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-l2dzr" Feb 19 08:35:41 crc kubenswrapper[4780]: I0219 08:35:41.157762 4780 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/speaker-l2dzr" podStartSLOduration=4.157746294 podStartE2EDuration="4.157746294s" podCreationTimestamp="2026-02-19 08:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:35:41.152962216 +0000 UTC m=+883.896619685" watchObservedRunningTime="2026-02-19 08:35:41.157746294 +0000 UTC m=+883.901403743" Feb 19 08:35:45 crc kubenswrapper[4780]: I0219 08:35:45.166676 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" event={"ID":"4cda5e67-a145-4db5-b4f7-5a0dca33ccd3","Type":"ContainerStarted","Data":"93eebdefcbc0d64fd27ae0ec780423ea98536427195cc63deca49bc95bc9b638"} Feb 19 08:35:45 crc kubenswrapper[4780]: I0219 08:35:45.167290 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:45 crc kubenswrapper[4780]: I0219 08:35:45.170733 4780 generic.go:334] "Generic (PLEG): container finished" podID="d67fd9e4-d9ed-457d-9b03-d226672c5e12" containerID="3c4528f427ec08e256533a8358644d10399c6c86aa509cf1fafb7c6834f9a00d" exitCode=0 Feb 19 08:35:45 crc kubenswrapper[4780]: I0219 08:35:45.170810 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerDied","Data":"3c4528f427ec08e256533a8358644d10399c6c86aa509cf1fafb7c6834f9a00d"} Feb 19 08:35:45 crc kubenswrapper[4780]: I0219 08:35:45.219622 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" podStartSLOduration=1.583341672 podStartE2EDuration="8.219602891s" podCreationTimestamp="2026-02-19 08:35:37 +0000 UTC" firstStartedPulling="2026-02-19 08:35:38.18803269 +0000 UTC m=+880.931690139" lastFinishedPulling="2026-02-19 08:35:44.824293909 +0000 UTC m=+887.567951358" 
observedRunningTime="2026-02-19 08:35:45.181742577 +0000 UTC m=+887.925400036" watchObservedRunningTime="2026-02-19 08:35:45.219602891 +0000 UTC m=+887.963260350" Feb 19 08:35:46 crc kubenswrapper[4780]: I0219 08:35:46.180763 4780 generic.go:334] "Generic (PLEG): container finished" podID="d67fd9e4-d9ed-457d-9b03-d226672c5e12" containerID="95549bdde9b8cd8252a31e945f5e8dba3337aef4bd162e378142cdfc9f1debe4" exitCode=0 Feb 19 08:35:46 crc kubenswrapper[4780]: I0219 08:35:46.180899 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerDied","Data":"95549bdde9b8cd8252a31e945f5e8dba3337aef4bd162e378142cdfc9f1debe4"} Feb 19 08:35:47 crc kubenswrapper[4780]: I0219 08:35:47.189910 4780 generic.go:334] "Generic (PLEG): container finished" podID="d67fd9e4-d9ed-457d-9b03-d226672c5e12" containerID="9426174cd528789001f24c5d7b4a9d1e95ea4f37bfad6a8ab3bb8ac0771ce30e" exitCode=0 Feb 19 08:35:47 crc kubenswrapper[4780]: I0219 08:35:47.189964 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerDied","Data":"9426174cd528789001f24c5d7b4a9d1e95ea4f37bfad6a8ab3bb8ac0771ce30e"} Feb 19 08:35:48 crc kubenswrapper[4780]: I0219 08:35:48.048154 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-xq67r" Feb 19 08:35:48 crc kubenswrapper[4780]: I0219 08:35:48.200989 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"9a6150320841f0da57f00f3259bc26498cf6926ea336d3a0f590c3f9cffe0293"} Feb 19 08:35:48 crc kubenswrapper[4780]: I0219 08:35:48.201025 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" 
event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"e50da68b7c0be3bd69b0043a6b6d5ca4e8e8ca6a93ef1939d0b14d5d43ab1f35"} Feb 19 08:35:48 crc kubenswrapper[4780]: I0219 08:35:48.201036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"ede95264c61493876561edbe87aa169b34c84848a9288967fb26efe68a1f0692"} Feb 19 08:35:48 crc kubenswrapper[4780]: I0219 08:35:48.201045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"a3e36f5ca664699f3ba27ed8f4e202a4b954e1955054c05506736e41f1e47474"} Feb 19 08:35:48 crc kubenswrapper[4780]: I0219 08:35:48.201054 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"d09283a300f54b1c840eae75bb535388cf36d92cd1b3c68a077907d12d7784a0"} Feb 19 08:35:49 crc kubenswrapper[4780]: I0219 08:35:49.219528 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-txb2m" event={"ID":"d67fd9e4-d9ed-457d-9b03-d226672c5e12","Type":"ContainerStarted","Data":"f9ba721d8cf257fcd5bbc54c13714de7561e0aa0163f1549d5895624bbd9aea9"} Feb 19 08:35:49 crc kubenswrapper[4780]: I0219 08:35:49.531958 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-l2dzr" Feb 19 08:35:50 crc kubenswrapper[4780]: I0219 08:35:50.229631 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:50 crc kubenswrapper[4780]: I0219 08:35:50.253886 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-txb2m" podStartSLOduration=7.091341268 podStartE2EDuration="13.253865579s" podCreationTimestamp="2026-02-19 08:35:37 +0000 
UTC" firstStartedPulling="2026-02-19 08:35:38.677889255 +0000 UTC m=+881.421546704" lastFinishedPulling="2026-02-19 08:35:44.840413556 +0000 UTC m=+887.584071015" observedRunningTime="2026-02-19 08:35:50.250594058 +0000 UTC m=+892.994251517" watchObservedRunningTime="2026-02-19 08:35:50.253865579 +0000 UTC m=+892.997523038" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.071820 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn"] Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.073205 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.075938 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.083068 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn"] Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.211397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.211582 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: 
\"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.211655 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pk2g\" (UniqueName: \"kubernetes.io/projected/a99ca6ec-4c17-45ff-b344-70e65b475774-kube-api-access-8pk2g\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.312888 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.312953 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pk2g\" (UniqueName: \"kubernetes.io/projected/a99ca6ec-4c17-45ff-b344-70e65b475774-kube-api-access-8pk2g\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.313025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.313473 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.313681 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.335149 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pk2g\" (UniqueName: \"kubernetes.io/projected/a99ca6ec-4c17-45ff-b344-70e65b475774-kube-api-access-8pk2g\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.388023 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:51 crc kubenswrapper[4780]: I0219 08:35:51.807015 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn"] Feb 19 08:35:52 crc kubenswrapper[4780]: I0219 08:35:52.243934 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" event={"ID":"a99ca6ec-4c17-45ff-b344-70e65b475774","Type":"ContainerStarted","Data":"6c24977c7679bfdcc9f4c74ed178b37b0793cc4307211db34ac18ec874a9d87b"} Feb 19 08:35:52 crc kubenswrapper[4780]: I0219 08:35:52.244336 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" event={"ID":"a99ca6ec-4c17-45ff-b344-70e65b475774","Type":"ContainerStarted","Data":"b91b06444bf5c34cbbf4df12badceefbc05e7bf68c1a8dd467f38d16fdc03384"} Feb 19 08:35:53 crc kubenswrapper[4780]: I0219 08:35:53.251181 4780 generic.go:334] "Generic (PLEG): container finished" podID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerID="6c24977c7679bfdcc9f4c74ed178b37b0793cc4307211db34ac18ec874a9d87b" exitCode=0 Feb 19 08:35:53 crc kubenswrapper[4780]: I0219 08:35:53.251238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" event={"ID":"a99ca6ec-4c17-45ff-b344-70e65b475774","Type":"ContainerDied","Data":"6c24977c7679bfdcc9f4c74ed178b37b0793cc4307211db34ac18ec874a9d87b"} Feb 19 08:35:53 crc kubenswrapper[4780]: I0219 08:35:53.532914 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:53 crc kubenswrapper[4780]: I0219 08:35:53.586073 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:56 crc kubenswrapper[4780]: I0219 08:35:56.277396 4780 generic.go:334] "Generic (PLEG): container finished" podID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerID="dae13d69a14a159d5d1a51be560b1159a837d0f9215733129c1070d3e7e82ca3" exitCode=0 Feb 19 08:35:56 crc kubenswrapper[4780]: I0219 08:35:56.277441 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" event={"ID":"a99ca6ec-4c17-45ff-b344-70e65b475774","Type":"ContainerDied","Data":"dae13d69a14a159d5d1a51be560b1159a837d0f9215733129c1070d3e7e82ca3"} Feb 19 08:35:57 crc kubenswrapper[4780]: I0219 08:35:57.287155 4780 generic.go:334] "Generic (PLEG): container finished" podID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerID="0eb81a5a7ec1fc829678033066e730a6430454c22d5aa7bbc837ed191751888b" exitCode=0 Feb 19 08:35:57 crc kubenswrapper[4780]: I0219 08:35:57.287261 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" event={"ID":"a99ca6ec-4c17-45ff-b344-70e65b475774","Type":"ContainerDied","Data":"0eb81a5a7ec1fc829678033066e730a6430454c22d5aa7bbc837ed191751888b"} Feb 19 08:35:57 crc kubenswrapper[4780]: I0219 08:35:57.953639 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-59pnp" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.535637 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-txb2m" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.632665 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.711233 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-bundle\") pod \"a99ca6ec-4c17-45ff-b344-70e65b475774\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.711298 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pk2g\" (UniqueName: \"kubernetes.io/projected/a99ca6ec-4c17-45ff-b344-70e65b475774-kube-api-access-8pk2g\") pod \"a99ca6ec-4c17-45ff-b344-70e65b475774\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.711344 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-util\") pod \"a99ca6ec-4c17-45ff-b344-70e65b475774\" (UID: \"a99ca6ec-4c17-45ff-b344-70e65b475774\") " Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.712418 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-bundle" (OuterVolumeSpecName: "bundle") pod "a99ca6ec-4c17-45ff-b344-70e65b475774" (UID: "a99ca6ec-4c17-45ff-b344-70e65b475774"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.716320 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99ca6ec-4c17-45ff-b344-70e65b475774-kube-api-access-8pk2g" (OuterVolumeSpecName: "kube-api-access-8pk2g") pod "a99ca6ec-4c17-45ff-b344-70e65b475774" (UID: "a99ca6ec-4c17-45ff-b344-70e65b475774"). InnerVolumeSpecName "kube-api-access-8pk2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.725337 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-util" (OuterVolumeSpecName: "util") pod "a99ca6ec-4c17-45ff-b344-70e65b475774" (UID: "a99ca6ec-4c17-45ff-b344-70e65b475774"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.812515 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.812552 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pk2g\" (UniqueName: \"kubernetes.io/projected/a99ca6ec-4c17-45ff-b344-70e65b475774-kube-api-access-8pk2g\") on node \"crc\" DevicePath \"\"" Feb 19 08:35:58 crc kubenswrapper[4780]: I0219 08:35:58.812569 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99ca6ec-4c17-45ff-b344-70e65b475774-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:35:59 crc kubenswrapper[4780]: I0219 08:35:59.304462 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" event={"ID":"a99ca6ec-4c17-45ff-b344-70e65b475774","Type":"ContainerDied","Data":"b91b06444bf5c34cbbf4df12badceefbc05e7bf68c1a8dd467f38d16fdc03384"} Feb 19 08:35:59 crc kubenswrapper[4780]: I0219 08:35:59.304771 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn" Feb 19 08:35:59 crc kubenswrapper[4780]: I0219 08:35:59.304795 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b91b06444bf5c34cbbf4df12badceefbc05e7bf68c1a8dd467f38d16fdc03384" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.719077 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq"] Feb 19 08:36:03 crc kubenswrapper[4780]: E0219 08:36:03.719879 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="extract" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.719896 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="extract" Feb 19 08:36:03 crc kubenswrapper[4780]: E0219 08:36:03.719907 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="util" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.719914 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="util" Feb 19 08:36:03 crc kubenswrapper[4780]: E0219 08:36:03.719929 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="pull" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.719938 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="pull" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.720079 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99ca6ec-4c17-45ff-b344-70e65b475774" containerName="extract" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.720613 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.723270 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-drxms" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.724263 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.724872 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.737362 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq"] Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.787009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d5aa745-f3e8-42e1-b220-0732129a4515-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9rjgq\" (UID: \"6d5aa745-f3e8-42e1-b220-0732129a4515\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.787184 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9shc\" (UniqueName: \"kubernetes.io/projected/6d5aa745-f3e8-42e1-b220-0732129a4515-kube-api-access-c9shc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9rjgq\" (UID: \"6d5aa745-f3e8-42e1-b220-0732129a4515\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.889388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/6d5aa745-f3e8-42e1-b220-0732129a4515-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9rjgq\" (UID: \"6d5aa745-f3e8-42e1-b220-0732129a4515\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.889489 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9shc\" (UniqueName: \"kubernetes.io/projected/6d5aa745-f3e8-42e1-b220-0732129a4515-kube-api-access-c9shc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9rjgq\" (UID: \"6d5aa745-f3e8-42e1-b220-0732129a4515\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.889893 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d5aa745-f3e8-42e1-b220-0732129a4515-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9rjgq\" (UID: \"6d5aa745-f3e8-42e1-b220-0732129a4515\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:03 crc kubenswrapper[4780]: I0219 08:36:03.913306 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9shc\" (UniqueName: \"kubernetes.io/projected/6d5aa745-f3e8-42e1-b220-0732129a4515-kube-api-access-c9shc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9rjgq\" (UID: \"6d5aa745-f3e8-42e1-b220-0732129a4515\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:04 crc kubenswrapper[4780]: I0219 08:36:04.043644 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" Feb 19 08:36:04 crc kubenswrapper[4780]: I0219 08:36:04.307195 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq"] Feb 19 08:36:04 crc kubenswrapper[4780]: W0219 08:36:04.312919 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d5aa745_f3e8_42e1_b220_0732129a4515.slice/crio-2cffb84ff301e0ee7bcaa7a9aa598b92ec44d3a777ca17bea0006caffb1d1b8f WatchSource:0}: Error finding container 2cffb84ff301e0ee7bcaa7a9aa598b92ec44d3a777ca17bea0006caffb1d1b8f: Status 404 returned error can't find the container with id 2cffb84ff301e0ee7bcaa7a9aa598b92ec44d3a777ca17bea0006caffb1d1b8f Feb 19 08:36:04 crc kubenswrapper[4780]: I0219 08:36:04.337288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" event={"ID":"6d5aa745-f3e8-42e1-b220-0732129a4515","Type":"ContainerStarted","Data":"2cffb84ff301e0ee7bcaa7a9aa598b92ec44d3a777ca17bea0006caffb1d1b8f"} Feb 19 08:36:06 crc kubenswrapper[4780]: I0219 08:36:06.336473 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:36:06 crc kubenswrapper[4780]: I0219 08:36:06.336766 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:36:07 crc kubenswrapper[4780]: 
I0219 08:36:07.354036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" event={"ID":"6d5aa745-f3e8-42e1-b220-0732129a4515","Type":"ContainerStarted","Data":"b542837e4b58f18e13f3a9e15473dcd5b215f7155e7fa68e913a3f046e5c992e"} Feb 19 08:36:07 crc kubenswrapper[4780]: I0219 08:36:07.382085 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9rjgq" podStartSLOduration=2.121281634 podStartE2EDuration="4.382062308s" podCreationTimestamp="2026-02-19 08:36:03 +0000 UTC" firstStartedPulling="2026-02-19 08:36:04.315086044 +0000 UTC m=+907.058743493" lastFinishedPulling="2026-02-19 08:36:06.575866718 +0000 UTC m=+909.319524167" observedRunningTime="2026-02-19 08:36:07.37932139 +0000 UTC m=+910.122978839" watchObservedRunningTime="2026-02-19 08:36:07.382062308 +0000 UTC m=+910.125719777" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.494429 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8r6cz"] Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.495378 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.498945 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wlr29" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.499595 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.499699 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.523093 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8r6cz"] Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.567913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82489\" (UniqueName: \"kubernetes.io/projected/70c8faf1-2778-49cb-a695-9a23b3df8652-kube-api-access-82489\") pod \"cert-manager-webhook-6888856db4-8r6cz\" (UID: \"70c8faf1-2778-49cb-a695-9a23b3df8652\") " pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.567975 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70c8faf1-2778-49cb-a695-9a23b3df8652-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8r6cz\" (UID: \"70c8faf1-2778-49cb-a695-9a23b3df8652\") " pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.669643 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82489\" (UniqueName: \"kubernetes.io/projected/70c8faf1-2778-49cb-a695-9a23b3df8652-kube-api-access-82489\") pod \"cert-manager-webhook-6888856db4-8r6cz\" (UID: 
\"70c8faf1-2778-49cb-a695-9a23b3df8652\") " pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.669736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70c8faf1-2778-49cb-a695-9a23b3df8652-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8r6cz\" (UID: \"70c8faf1-2778-49cb-a695-9a23b3df8652\") " pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.688802 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70c8faf1-2778-49cb-a695-9a23b3df8652-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8r6cz\" (UID: \"70c8faf1-2778-49cb-a695-9a23b3df8652\") " pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.696893 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82489\" (UniqueName: \"kubernetes.io/projected/70c8faf1-2778-49cb-a695-9a23b3df8652-kube-api-access-82489\") pod \"cert-manager-webhook-6888856db4-8r6cz\" (UID: \"70c8faf1-2778-49cb-a695-9a23b3df8652\") " pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:10 crc kubenswrapper[4780]: I0219 08:36:10.812659 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:11 crc kubenswrapper[4780]: I0219 08:36:11.059355 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8r6cz"] Feb 19 08:36:11 crc kubenswrapper[4780]: I0219 08:36:11.375408 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" event={"ID":"70c8faf1-2778-49cb-a695-9a23b3df8652","Type":"ContainerStarted","Data":"fa59abb238ec8c26b37fa891ed9f197d76bdcfc0e167eaa47d5b502f7681e567"} Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.021604 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hfz58"] Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.022652 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.024505 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hqpsl" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.030920 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hfz58"] Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.117991 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wsm\" (UniqueName: \"kubernetes.io/projected/8e39e1f9-ed95-4bc3-8ab3-c786da63825c-kube-api-access-r5wsm\") pod \"cert-manager-cainjector-5545bd876-hfz58\" (UID: \"8e39e1f9-ed95-4bc3-8ab3-c786da63825c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.118323 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8e39e1f9-ed95-4bc3-8ab3-c786da63825c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hfz58\" (UID: \"8e39e1f9-ed95-4bc3-8ab3-c786da63825c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.219546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e39e1f9-ed95-4bc3-8ab3-c786da63825c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hfz58\" (UID: \"8e39e1f9-ed95-4bc3-8ab3-c786da63825c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.219650 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wsm\" (UniqueName: \"kubernetes.io/projected/8e39e1f9-ed95-4bc3-8ab3-c786da63825c-kube-api-access-r5wsm\") pod \"cert-manager-cainjector-5545bd876-hfz58\" (UID: \"8e39e1f9-ed95-4bc3-8ab3-c786da63825c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.237874 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e39e1f9-ed95-4bc3-8ab3-c786da63825c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hfz58\" (UID: \"8e39e1f9-ed95-4bc3-8ab3-c786da63825c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.241906 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wsm\" (UniqueName: \"kubernetes.io/projected/8e39e1f9-ed95-4bc3-8ab3-c786da63825c-kube-api-access-r5wsm\") pod \"cert-manager-cainjector-5545bd876-hfz58\" (UID: \"8e39e1f9-ed95-4bc3-8ab3-c786da63825c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:14 crc kubenswrapper[4780]: I0219 08:36:14.352837 4780 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" Feb 19 08:36:15 crc kubenswrapper[4780]: I0219 08:36:15.568477 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hfz58"] Feb 19 08:36:15 crc kubenswrapper[4780]: W0219 08:36:15.575504 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e39e1f9_ed95_4bc3_8ab3_c786da63825c.slice/crio-2d2fa1f0ec99660587a55c4f1db8bb4b3407da01e2b6c161301db95a8e860412 WatchSource:0}: Error finding container 2d2fa1f0ec99660587a55c4f1db8bb4b3407da01e2b6c161301db95a8e860412: Status 404 returned error can't find the container with id 2d2fa1f0ec99660587a55c4f1db8bb4b3407da01e2b6c161301db95a8e860412 Feb 19 08:36:16 crc kubenswrapper[4780]: I0219 08:36:16.414282 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" event={"ID":"8e39e1f9-ed95-4bc3-8ab3-c786da63825c","Type":"ContainerStarted","Data":"e12ea10459706780c8dfa64bde808174305fa544850d81a69042c9a419c71748"} Feb 19 08:36:16 crc kubenswrapper[4780]: I0219 08:36:16.414558 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" event={"ID":"8e39e1f9-ed95-4bc3-8ab3-c786da63825c","Type":"ContainerStarted","Data":"2d2fa1f0ec99660587a55c4f1db8bb4b3407da01e2b6c161301db95a8e860412"} Feb 19 08:36:16 crc kubenswrapper[4780]: I0219 08:36:16.417697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" event={"ID":"70c8faf1-2778-49cb-a695-9a23b3df8652","Type":"ContainerStarted","Data":"9df9688024d43045f1052a8505c3a8f7f6dc82dbe5e50b82e9004ed54107cf16"} Feb 19 08:36:16 crc kubenswrapper[4780]: I0219 08:36:16.417825 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" 
Feb 19 08:36:16 crc kubenswrapper[4780]: I0219 08:36:16.440840 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-hfz58" podStartSLOduration=3.440810869 podStartE2EDuration="3.440810869s" podCreationTimestamp="2026-02-19 08:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:36:16.436351329 +0000 UTC m=+919.180008768" watchObservedRunningTime="2026-02-19 08:36:16.440810869 +0000 UTC m=+919.184468328" Feb 19 08:36:20 crc kubenswrapper[4780]: I0219 08:36:20.817949 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" Feb 19 08:36:20 crc kubenswrapper[4780]: I0219 08:36:20.843668 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8r6cz" podStartSLOduration=6.7006540789999995 podStartE2EDuration="10.843635108s" podCreationTimestamp="2026-02-19 08:36:10 +0000 UTC" firstStartedPulling="2026-02-19 08:36:11.079689949 +0000 UTC m=+913.823347398" lastFinishedPulling="2026-02-19 08:36:15.222670978 +0000 UTC m=+917.966328427" observedRunningTime="2026-02-19 08:36:16.475414033 +0000 UTC m=+919.219071482" watchObservedRunningTime="2026-02-19 08:36:20.843635108 +0000 UTC m=+923.587292607" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.510350 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-27s7m"] Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.511371 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.515455 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6bdmt" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.522148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6klz9\" (UniqueName: \"kubernetes.io/projected/0fd4d870-a923-461c-bfa8-c9bfe0d87c1a-kube-api-access-6klz9\") pod \"cert-manager-545d4d4674-27s7m\" (UID: \"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a\") " pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.522288 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fd4d870-a923-461c-bfa8-c9bfe0d87c1a-bound-sa-token\") pod \"cert-manager-545d4d4674-27s7m\" (UID: \"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a\") " pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.536623 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-27s7m"] Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.623200 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fd4d870-a923-461c-bfa8-c9bfe0d87c1a-bound-sa-token\") pod \"cert-manager-545d4d4674-27s7m\" (UID: \"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a\") " pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.623291 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6klz9\" (UniqueName: \"kubernetes.io/projected/0fd4d870-a923-461c-bfa8-c9bfe0d87c1a-kube-api-access-6klz9\") pod \"cert-manager-545d4d4674-27s7m\" (UID: 
\"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a\") " pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.658370 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6klz9\" (UniqueName: \"kubernetes.io/projected/0fd4d870-a923-461c-bfa8-c9bfe0d87c1a-kube-api-access-6klz9\") pod \"cert-manager-545d4d4674-27s7m\" (UID: \"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a\") " pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.659226 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fd4d870-a923-461c-bfa8-c9bfe0d87c1a-bound-sa-token\") pod \"cert-manager-545d4d4674-27s7m\" (UID: \"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a\") " pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:21 crc kubenswrapper[4780]: I0219 08:36:21.855072 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-27s7m" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.082708 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-27s7m"] Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.465089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-27s7m" event={"ID":"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a","Type":"ContainerStarted","Data":"42ddd7566d07a7c38cad6f52f6fbbc768021110a1e6c2ac4d7b7d381250fd9e3"} Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.465480 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-27s7m" event={"ID":"0fd4d870-a923-461c-bfa8-c9bfe0d87c1a","Type":"ContainerStarted","Data":"1efd01261bbb75da2a48495c49d353dd65559a005beba1f1409e5138234d47a8"} Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.486939 4780 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-27s7m" podStartSLOduration=1.486878168 podStartE2EDuration="1.486878168s" podCreationTimestamp="2026-02-19 08:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:36:22.4865535 +0000 UTC m=+925.230210989" watchObservedRunningTime="2026-02-19 08:36:22.486878168 +0000 UTC m=+925.230535657" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.535241 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s8fvm"] Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.537758 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.621945 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s8fvm"] Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.635997 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-catalog-content\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.636109 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszvh\" (UniqueName: \"kubernetes.io/projected/72b50492-adc4-475d-a97f-66f0841c56b4-kube-api-access-hszvh\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.636187 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-utilities\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.736881 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-catalog-content\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.736942 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hszvh\" (UniqueName: \"kubernetes.io/projected/72b50492-adc4-475d-a97f-66f0841c56b4-kube-api-access-hszvh\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.736970 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-utilities\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.737437 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-catalog-content\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.737472 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-utilities\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.773073 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hszvh\" (UniqueName: \"kubernetes.io/projected/72b50492-adc4-475d-a97f-66f0841c56b4-kube-api-access-hszvh\") pod \"certified-operators-s8fvm\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:22 crc kubenswrapper[4780]: I0219 08:36:22.860189 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:23 crc kubenswrapper[4780]: I0219 08:36:23.096872 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s8fvm"] Feb 19 08:36:23 crc kubenswrapper[4780]: I0219 08:36:23.478282 4780 generic.go:334] "Generic (PLEG): container finished" podID="72b50492-adc4-475d-a97f-66f0841c56b4" containerID="ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0" exitCode=0 Feb 19 08:36:23 crc kubenswrapper[4780]: I0219 08:36:23.478380 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerDied","Data":"ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0"} Feb 19 08:36:23 crc kubenswrapper[4780]: I0219 08:36:23.478657 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerStarted","Data":"4c268cc3ffdd1ae192c747b174d78464c56a9895e5a5c1da15d3205298fff010"} Feb 19 08:36:24 crc kubenswrapper[4780]: I0219 08:36:24.486199 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerStarted","Data":"84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7"} Feb 19 08:36:25 crc kubenswrapper[4780]: I0219 08:36:25.494487 4780 generic.go:334] "Generic (PLEG): container finished" podID="72b50492-adc4-475d-a97f-66f0841c56b4" containerID="84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7" exitCode=0 Feb 19 08:36:25 crc kubenswrapper[4780]: I0219 08:36:25.494533 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerDied","Data":"84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7"} Feb 19 08:36:26 crc kubenswrapper[4780]: I0219 08:36:26.504355 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerStarted","Data":"cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001"} Feb 19 08:36:26 crc kubenswrapper[4780]: I0219 08:36:26.530539 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s8fvm" podStartSLOduration=2.10994167 podStartE2EDuration="4.530525447s" podCreationTimestamp="2026-02-19 08:36:22 +0000 UTC" firstStartedPulling="2026-02-19 08:36:23.479524877 +0000 UTC m=+926.223182326" lastFinishedPulling="2026-02-19 08:36:25.900108654 +0000 UTC m=+928.643766103" observedRunningTime="2026-02-19 08:36:26.527383649 +0000 UTC m=+929.271041098" watchObservedRunningTime="2026-02-19 08:36:26.530525447 +0000 UTC m=+929.274182896" Feb 19 08:36:28 crc kubenswrapper[4780]: I0219 08:36:28.895835 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g4spf"] Feb 19 08:36:28 crc kubenswrapper[4780]: I0219 08:36:28.897177 4780 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:28 crc kubenswrapper[4780]: I0219 08:36:28.899254 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-n596s" Feb 19 08:36:28 crc kubenswrapper[4780]: I0219 08:36:28.899908 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 08:36:28 crc kubenswrapper[4780]: I0219 08:36:28.900438 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 08:36:28 crc kubenswrapper[4780]: I0219 08:36:28.913746 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g4spf"] Feb 19 08:36:29 crc kubenswrapper[4780]: I0219 08:36:29.007287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtp8\" (UniqueName: \"kubernetes.io/projected/f7592074-215c-43d2-aa60-edaf6a5a141c-kube-api-access-lmtp8\") pod \"openstack-operator-index-g4spf\" (UID: \"f7592074-215c-43d2-aa60-edaf6a5a141c\") " pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:29 crc kubenswrapper[4780]: I0219 08:36:29.109347 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtp8\" (UniqueName: \"kubernetes.io/projected/f7592074-215c-43d2-aa60-edaf6a5a141c-kube-api-access-lmtp8\") pod \"openstack-operator-index-g4spf\" (UID: \"f7592074-215c-43d2-aa60-edaf6a5a141c\") " pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:29 crc kubenswrapper[4780]: I0219 08:36:29.143061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtp8\" (UniqueName: \"kubernetes.io/projected/f7592074-215c-43d2-aa60-edaf6a5a141c-kube-api-access-lmtp8\") pod \"openstack-operator-index-g4spf\" (UID: 
\"f7592074-215c-43d2-aa60-edaf6a5a141c\") " pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:29 crc kubenswrapper[4780]: I0219 08:36:29.220788 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:29 crc kubenswrapper[4780]: I0219 08:36:29.683730 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g4spf"] Feb 19 08:36:30 crc kubenswrapper[4780]: I0219 08:36:30.531911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4spf" event={"ID":"f7592074-215c-43d2-aa60-edaf6a5a141c","Type":"ContainerStarted","Data":"9edc0afe23abf192cced08ea2e955bb2e4ac37d102eb0a9b530b30f283213a43"} Feb 19 08:36:32 crc kubenswrapper[4780]: I0219 08:36:32.860950 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:32 crc kubenswrapper[4780]: I0219 08:36:32.861321 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:32 crc kubenswrapper[4780]: I0219 08:36:32.907504 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:33 crc kubenswrapper[4780]: I0219 08:36:33.579399 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4spf" event={"ID":"f7592074-215c-43d2-aa60-edaf6a5a141c","Type":"ContainerStarted","Data":"3ccd4107e9cd121d86ab5b236b8a69d0fe9b56f91d57e21a314666cad2d87fb6"} Feb 19 08:36:33 crc kubenswrapper[4780]: I0219 08:36:33.625486 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:33 crc kubenswrapper[4780]: I0219 08:36:33.654920 4780 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g4spf" podStartSLOduration=2.428133942 podStartE2EDuration="5.654894877s" podCreationTimestamp="2026-02-19 08:36:28 +0000 UTC" firstStartedPulling="2026-02-19 08:36:29.695344234 +0000 UTC m=+932.439001683" lastFinishedPulling="2026-02-19 08:36:32.922105159 +0000 UTC m=+935.665762618" observedRunningTime="2026-02-19 08:36:33.599103021 +0000 UTC m=+936.342760470" watchObservedRunningTime="2026-02-19 08:36:33.654894877 +0000 UTC m=+936.398552326" Feb 19 08:36:34 crc kubenswrapper[4780]: I0219 08:36:34.478305 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s8fvm"] Feb 19 08:36:35 crc kubenswrapper[4780]: I0219 08:36:35.590383 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s8fvm" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="registry-server" containerID="cri-o://cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001" gracePeriod=2 Feb 19 08:36:36 crc kubenswrapper[4780]: I0219 08:36:36.335977 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:36:36 crc kubenswrapper[4780]: I0219 08:36:36.336289 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:36:36 crc kubenswrapper[4780]: I0219 08:36:36.336332 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:36:36 crc kubenswrapper[4780]: I0219 08:36:36.336897 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d86287631278548b0c80b1e05e352ed6295219281318df1c4880abec1eb44525"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:36:36 crc kubenswrapper[4780]: I0219 08:36:36.336968 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://d86287631278548b0c80b1e05e352ed6295219281318df1c4880abec1eb44525" gracePeriod=600 Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.305771 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.424822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-catalog-content\") pod \"72b50492-adc4-475d-a97f-66f0841c56b4\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.425047 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hszvh\" (UniqueName: \"kubernetes.io/projected/72b50492-adc4-475d-a97f-66f0841c56b4-kube-api-access-hszvh\") pod \"72b50492-adc4-475d-a97f-66f0841c56b4\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.425071 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-utilities\") pod \"72b50492-adc4-475d-a97f-66f0841c56b4\" (UID: \"72b50492-adc4-475d-a97f-66f0841c56b4\") " Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.425844 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-utilities" (OuterVolumeSpecName: "utilities") pod "72b50492-adc4-475d-a97f-66f0841c56b4" (UID: "72b50492-adc4-475d-a97f-66f0841c56b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.440380 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b50492-adc4-475d-a97f-66f0841c56b4-kube-api-access-hszvh" (OuterVolumeSpecName: "kube-api-access-hszvh") pod "72b50492-adc4-475d-a97f-66f0841c56b4" (UID: "72b50492-adc4-475d-a97f-66f0841c56b4"). InnerVolumeSpecName "kube-api-access-hszvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.476449 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72b50492-adc4-475d-a97f-66f0841c56b4" (UID: "72b50492-adc4-475d-a97f-66f0841c56b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.526740 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.526768 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hszvh\" (UniqueName: \"kubernetes.io/projected/72b50492-adc4-475d-a97f-66f0841c56b4-kube-api-access-hszvh\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.526779 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72b50492-adc4-475d-a97f-66f0841c56b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.604113 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="d86287631278548b0c80b1e05e352ed6295219281318df1c4880abec1eb44525" exitCode=0 Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.604214 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"d86287631278548b0c80b1e05e352ed6295219281318df1c4880abec1eb44525"} Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.604240 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"7e9c7d776ebb9cfa84ab86f90613a2614d1dd3b9e07622f097d10b3d1fb58504"} Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.604258 4780 scope.go:117] "RemoveContainer" containerID="84ed25dcb2239f2ba5f5ca3fb35c26e2541ae4e71e20ce55bb0fede65e97942c" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.608033 4780 generic.go:334] "Generic (PLEG): container finished" podID="72b50492-adc4-475d-a97f-66f0841c56b4" containerID="cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001" exitCode=0 Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.608092 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerDied","Data":"cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001"} Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.608168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8fvm" event={"ID":"72b50492-adc4-475d-a97f-66f0841c56b4","Type":"ContainerDied","Data":"4c268cc3ffdd1ae192c747b174d78464c56a9895e5a5c1da15d3205298fff010"} Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.608173 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s8fvm" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.646406 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s8fvm"] Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.646645 4780 scope.go:117] "RemoveContainer" containerID="cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.652681 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s8fvm"] Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.664221 4780 scope.go:117] "RemoveContainer" containerID="84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.683431 4780 scope.go:117] "RemoveContainer" containerID="ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.717353 4780 scope.go:117] "RemoveContainer" containerID="cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001" Feb 19 08:36:37 crc kubenswrapper[4780]: E0219 08:36:37.719456 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001\": container with ID starting with cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001 not found: ID does not exist" containerID="cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.719490 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001"} err="failed to get container status \"cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001\": rpc error: code = NotFound desc = could not find 
container \"cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001\": container with ID starting with cb7fd20ed1127093085a82c1124087db7b6c96c62640464733cc8229ad058001 not found: ID does not exist" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.719517 4780 scope.go:117] "RemoveContainer" containerID="84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7" Feb 19 08:36:37 crc kubenswrapper[4780]: E0219 08:36:37.719918 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7\": container with ID starting with 84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7 not found: ID does not exist" containerID="84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.719942 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7"} err="failed to get container status \"84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7\": rpc error: code = NotFound desc = could not find container \"84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7\": container with ID starting with 84d8f6e2d39a7eae09b5a61b5c90c7f24795dcfce94e04f68867a8aacc3a66d7 not found: ID does not exist" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.719962 4780 scope.go:117] "RemoveContainer" containerID="ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0" Feb 19 08:36:37 crc kubenswrapper[4780]: E0219 08:36:37.720375 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0\": container with ID starting with ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0 not found: ID does 
not exist" containerID="ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.720400 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0"} err="failed to get container status \"ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0\": rpc error: code = NotFound desc = could not find container \"ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0\": container with ID starting with ce53f41c3230ba56a0e2b77a61e893471f2c4cca94adae270a3e4472b40e12c0 not found: ID does not exist" Feb 19 08:36:37 crc kubenswrapper[4780]: I0219 08:36:37.953318 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" path="/var/lib/kubelet/pods/72b50492-adc4-475d-a97f-66f0841c56b4/volumes" Feb 19 08:36:39 crc kubenswrapper[4780]: I0219 08:36:39.222377 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:39 crc kubenswrapper[4780]: I0219 08:36:39.222426 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:39 crc kubenswrapper[4780]: I0219 08:36:39.261168 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:39 crc kubenswrapper[4780]: I0219 08:36:39.707900 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g4spf" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.717042 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl"] Feb 19 08:36:41 crc kubenswrapper[4780]: E0219 08:36:41.717547 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="extract-content" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.717561 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="extract-content" Feb 19 08:36:41 crc kubenswrapper[4780]: E0219 08:36:41.717584 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="registry-server" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.717592 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="registry-server" Feb 19 08:36:41 crc kubenswrapper[4780]: E0219 08:36:41.717607 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="extract-utilities" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.717613 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="extract-utilities" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.717712 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b50492-adc4-475d-a97f-66f0841c56b4" containerName="registry-server" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.718516 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.719817 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2kwsq" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.727145 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl"] Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.880220 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.880393 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.880512 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphk8\" (UniqueName: \"kubernetes.io/projected/14e7421b-1fd7-4d9d-995c-c2855cc56779-kube-api-access-mphk8\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 
08:36:41.981935 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.981981 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphk8\" (UniqueName: \"kubernetes.io/projected/14e7421b-1fd7-4d9d-995c-c2855cc56779-kube-api-access-mphk8\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.982106 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.983015 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:41 crc kubenswrapper[4780]: I0219 08:36:41.983255 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.002147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphk8\" (UniqueName: \"kubernetes.io/projected/14e7421b-1fd7-4d9d-995c-c2855cc56779-kube-api-access-mphk8\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.053359 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.302567 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl"] Feb 19 08:36:42 crc kubenswrapper[4780]: W0219 08:36:42.308302 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14e7421b_1fd7_4d9d_995c_c2855cc56779.slice/crio-0f2da31678bcf33387192ef280c777e80c053fb30a9259e007cf186ac5bc3324 WatchSource:0}: Error finding container 0f2da31678bcf33387192ef280c777e80c053fb30a9259e007cf186ac5bc3324: Status 404 returned error can't find the container with id 0f2da31678bcf33387192ef280c777e80c053fb30a9259e007cf186ac5bc3324 Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.661441 4780 generic.go:334] "Generic (PLEG): container finished" podID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerID="8671f202c461f197efe02577ea7517955470337ecd7a375090b12873d9d45afb" exitCode=0 Feb 19 
08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.661494 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" event={"ID":"14e7421b-1fd7-4d9d-995c-c2855cc56779","Type":"ContainerDied","Data":"8671f202c461f197efe02577ea7517955470337ecd7a375090b12873d9d45afb"} Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.661835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" event={"ID":"14e7421b-1fd7-4d9d-995c-c2855cc56779","Type":"ContainerStarted","Data":"0f2da31678bcf33387192ef280c777e80c053fb30a9259e007cf186ac5bc3324"} Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.695090 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t6njc"] Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.696940 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.710165 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6njc"] Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.893229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-catalog-content\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.893302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-utilities\") pod \"redhat-marketplace-t6njc\" (UID: 
\"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.893683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqbs6\" (UniqueName: \"kubernetes.io/projected/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-kube-api-access-gqbs6\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.994817 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-catalog-content\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.994875 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-utilities\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.994901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqbs6\" (UniqueName: \"kubernetes.io/projected/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-kube-api-access-gqbs6\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.995477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-catalog-content\") pod \"redhat-marketplace-t6njc\" (UID: 
\"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:42 crc kubenswrapper[4780]: I0219 08:36:42.995621 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-utilities\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:43 crc kubenswrapper[4780]: I0219 08:36:43.013025 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqbs6\" (UniqueName: \"kubernetes.io/projected/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-kube-api-access-gqbs6\") pod \"redhat-marketplace-t6njc\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:43 crc kubenswrapper[4780]: I0219 08:36:43.015643 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:43 crc kubenswrapper[4780]: I0219 08:36:43.467826 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6njc"] Feb 19 08:36:43 crc kubenswrapper[4780]: I0219 08:36:43.668990 4780 generic.go:334] "Generic (PLEG): container finished" podID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerID="388830fbd6e1f6f117ffd3d701d0eef3c5553253c154c178f88626c547ee6bfa" exitCode=0 Feb 19 08:36:43 crc kubenswrapper[4780]: I0219 08:36:43.669232 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerDied","Data":"388830fbd6e1f6f117ffd3d701d0eef3c5553253c154c178f88626c547ee6bfa"} Feb 19 08:36:43 crc kubenswrapper[4780]: I0219 08:36:43.669280 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" 
event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerStarted","Data":"de377c73515ba424ae5298c653631eb8a685a701c4a58991f14c8cfb875ea34f"} Feb 19 08:36:44 crc kubenswrapper[4780]: I0219 08:36:44.682220 4780 generic.go:334] "Generic (PLEG): container finished" podID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerID="edc5d5ea214255cb47e3c320c843f554f5f7f2bc7a6cac8a4f6cc48f87239a0e" exitCode=0 Feb 19 08:36:44 crc kubenswrapper[4780]: I0219 08:36:44.682310 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" event={"ID":"14e7421b-1fd7-4d9d-995c-c2855cc56779","Type":"ContainerDied","Data":"edc5d5ea214255cb47e3c320c843f554f5f7f2bc7a6cac8a4f6cc48f87239a0e"} Feb 19 08:36:44 crc kubenswrapper[4780]: I0219 08:36:44.688418 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerStarted","Data":"8eb9d3148f43ab7638561800f6f6bd404c1e5143b78006b160a1c1e860deafa7"} Feb 19 08:36:45 crc kubenswrapper[4780]: I0219 08:36:45.695162 4780 generic.go:334] "Generic (PLEG): container finished" podID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerID="8eb9d3148f43ab7638561800f6f6bd404c1e5143b78006b160a1c1e860deafa7" exitCode=0 Feb 19 08:36:45 crc kubenswrapper[4780]: I0219 08:36:45.695299 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerDied","Data":"8eb9d3148f43ab7638561800f6f6bd404c1e5143b78006b160a1c1e860deafa7"} Feb 19 08:36:45 crc kubenswrapper[4780]: I0219 08:36:45.698108 4780 generic.go:334] "Generic (PLEG): container finished" podID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerID="4492f7ee4dad6b92d42d034b6ce52a280b86dfa410ff25a7838ba46a1708f9a6" exitCode=0 Feb 19 08:36:45 crc kubenswrapper[4780]: I0219 08:36:45.698211 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" event={"ID":"14e7421b-1fd7-4d9d-995c-c2855cc56779","Type":"ContainerDied","Data":"4492f7ee4dad6b92d42d034b6ce52a280b86dfa410ff25a7838ba46a1708f9a6"} Feb 19 08:36:46 crc kubenswrapper[4780]: I0219 08:36:46.991458 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.045375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphk8\" (UniqueName: \"kubernetes.io/projected/14e7421b-1fd7-4d9d-995c-c2855cc56779-kube-api-access-mphk8\") pod \"14e7421b-1fd7-4d9d-995c-c2855cc56779\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.045430 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-bundle\") pod \"14e7421b-1fd7-4d9d-995c-c2855cc56779\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.045467 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-util\") pod \"14e7421b-1fd7-4d9d-995c-c2855cc56779\" (UID: \"14e7421b-1fd7-4d9d-995c-c2855cc56779\") " Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.046368 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-bundle" (OuterVolumeSpecName: "bundle") pod "14e7421b-1fd7-4d9d-995c-c2855cc56779" (UID: "14e7421b-1fd7-4d9d-995c-c2855cc56779"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.054219 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e7421b-1fd7-4d9d-995c-c2855cc56779-kube-api-access-mphk8" (OuterVolumeSpecName: "kube-api-access-mphk8") pod "14e7421b-1fd7-4d9d-995c-c2855cc56779" (UID: "14e7421b-1fd7-4d9d-995c-c2855cc56779"). InnerVolumeSpecName "kube-api-access-mphk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.062589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-util" (OuterVolumeSpecName: "util") pod "14e7421b-1fd7-4d9d-995c-c2855cc56779" (UID: "14e7421b-1fd7-4d9d-995c-c2855cc56779"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.146432 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphk8\" (UniqueName: \"kubernetes.io/projected/14e7421b-1fd7-4d9d-995c-c2855cc56779-kube-api-access-mphk8\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.146830 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.146888 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14e7421b-1fd7-4d9d-995c-c2855cc56779-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.713848 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" 
event={"ID":"14e7421b-1fd7-4d9d-995c-c2855cc56779","Type":"ContainerDied","Data":"0f2da31678bcf33387192ef280c777e80c053fb30a9259e007cf186ac5bc3324"} Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.713893 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2da31678bcf33387192ef280c777e80c053fb30a9259e007cf186ac5bc3324" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.714381 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl" Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.716610 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerStarted","Data":"cbe191ec98f10ed39f7bb2beafaaf4638340832998d1694e209deb3b509d3bba"} Feb 19 08:36:47 crc kubenswrapper[4780]: I0219 08:36:47.750578 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t6njc" podStartSLOduration=2.370422944 podStartE2EDuration="5.750553083s" podCreationTimestamp="2026-02-19 08:36:42 +0000 UTC" firstStartedPulling="2026-02-19 08:36:43.670547407 +0000 UTC m=+946.414204876" lastFinishedPulling="2026-02-19 08:36:47.050677576 +0000 UTC m=+949.794335015" observedRunningTime="2026-02-19 08:36:47.735204204 +0000 UTC m=+950.478861663" watchObservedRunningTime="2026-02-19 08:36:47.750553083 +0000 UTC m=+950.494210532" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.484105 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xp6x2"] Feb 19 08:36:50 crc kubenswrapper[4780]: E0219 08:36:50.484815 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="pull" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.484836 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="pull" Feb 19 08:36:50 crc kubenswrapper[4780]: E0219 08:36:50.484855 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="extract" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.484864 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="extract" Feb 19 08:36:50 crc kubenswrapper[4780]: E0219 08:36:50.484882 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="util" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.484891 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="util" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.485057 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e7421b-1fd7-4d9d-995c-c2855cc56779" containerName="extract" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.486089 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.493869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-utilities\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.498179 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xp6x2"] Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.594708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-catalog-content\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.594781 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-utilities\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.594814 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2vf\" (UniqueName: \"kubernetes.io/projected/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-kube-api-access-ll2vf\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.595363 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-utilities\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.696211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-catalog-content\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.696321 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2vf\" (UniqueName: \"kubernetes.io/projected/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-kube-api-access-ll2vf\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.697018 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-catalog-content\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.724026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2vf\" (UniqueName: \"kubernetes.io/projected/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-kube-api-access-ll2vf\") pod \"community-operators-xp6x2\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:50 crc kubenswrapper[4780]: I0219 08:36:50.805490 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:36:51 crc kubenswrapper[4780]: I0219 08:36:51.247357 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xp6x2"] Feb 19 08:36:51 crc kubenswrapper[4780]: W0219 08:36:51.254671 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a752477_c7ff_4bf3_a7e7_1d95ce10cbb9.slice/crio-ecfe35ac354ce08eccc102e967eb632656854c51dc70ac69e7ef47ea606add06 WatchSource:0}: Error finding container ecfe35ac354ce08eccc102e967eb632656854c51dc70ac69e7ef47ea606add06: Status 404 returned error can't find the container with id ecfe35ac354ce08eccc102e967eb632656854c51dc70ac69e7ef47ea606add06 Feb 19 08:36:51 crc kubenswrapper[4780]: I0219 08:36:51.745253 4780 generic.go:334] "Generic (PLEG): container finished" podID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerID="589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52" exitCode=0 Feb 19 08:36:51 crc kubenswrapper[4780]: I0219 08:36:51.745289 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xp6x2" event={"ID":"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9","Type":"ContainerDied","Data":"589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52"} Feb 19 08:36:51 crc kubenswrapper[4780]: I0219 08:36:51.745323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xp6x2" event={"ID":"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9","Type":"ContainerStarted","Data":"ecfe35ac354ce08eccc102e967eb632656854c51dc70ac69e7ef47ea606add06"} Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.056314 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm"] Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.056991 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.071413 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sg7gt" Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.093041 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm"] Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.214596 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrfb\" (UniqueName: \"kubernetes.io/projected/e6727151-55c9-47ba-b54e-45938c21180a-kube-api-access-fhrfb\") pod \"openstack-operator-controller-init-6679bf9b57-gbqtm\" (UID: \"e6727151-55c9-47ba-b54e-45938c21180a\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.315956 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrfb\" (UniqueName: \"kubernetes.io/projected/e6727151-55c9-47ba-b54e-45938c21180a-kube-api-access-fhrfb\") pod \"openstack-operator-controller-init-6679bf9b57-gbqtm\" (UID: \"e6727151-55c9-47ba-b54e-45938c21180a\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.339320 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrfb\" (UniqueName: \"kubernetes.io/projected/e6727151-55c9-47ba-b54e-45938c21180a-kube-api-access-fhrfb\") pod \"openstack-operator-controller-init-6679bf9b57-gbqtm\" (UID: \"e6727151-55c9-47ba-b54e-45938c21180a\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.375703 4780 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.685988 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm"] Feb 19 08:36:52 crc kubenswrapper[4780]: W0219 08:36:52.750966 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6727151_55c9_47ba_b54e_45938c21180a.slice/crio-c043e608969bd98ddcff895b2b3df4ee3f228ec3f553e3d235918ba6dbbea13b WatchSource:0}: Error finding container c043e608969bd98ddcff895b2b3df4ee3f228ec3f553e3d235918ba6dbbea13b: Status 404 returned error can't find the container with id c043e608969bd98ddcff895b2b3df4ee3f228ec3f553e3d235918ba6dbbea13b Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.753399 4780 generic.go:334] "Generic (PLEG): container finished" podID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerID="388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0" exitCode=0 Feb 19 08:36:52 crc kubenswrapper[4780]: I0219 08:36:52.753431 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xp6x2" event={"ID":"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9","Type":"ContainerDied","Data":"388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0"} Feb 19 08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.016224 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.016308 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.071168 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 
08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.764888 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" event={"ID":"e6727151-55c9-47ba-b54e-45938c21180a","Type":"ContainerStarted","Data":"c043e608969bd98ddcff895b2b3df4ee3f228ec3f553e3d235918ba6dbbea13b"} Feb 19 08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.770720 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xp6x2" event={"ID":"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9","Type":"ContainerStarted","Data":"cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee"} Feb 19 08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.789497 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xp6x2" podStartSLOduration=2.350421603 podStartE2EDuration="3.789478404s" podCreationTimestamp="2026-02-19 08:36:50 +0000 UTC" firstStartedPulling="2026-02-19 08:36:51.746451073 +0000 UTC m=+954.490108522" lastFinishedPulling="2026-02-19 08:36:53.185507874 +0000 UTC m=+955.929165323" observedRunningTime="2026-02-19 08:36:53.784253715 +0000 UTC m=+956.527911164" watchObservedRunningTime="2026-02-19 08:36:53.789478404 +0000 UTC m=+956.533135843" Feb 19 08:36:53 crc kubenswrapper[4780]: I0219 08:36:53.812585 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:56 crc kubenswrapper[4780]: I0219 08:36:56.675004 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6njc"] Feb 19 08:36:56 crc kubenswrapper[4780]: I0219 08:36:56.675695 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t6njc" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="registry-server" 
containerID="cri-o://cbe191ec98f10ed39f7bb2beafaaf4638340832998d1694e209deb3b509d3bba" gracePeriod=2 Feb 19 08:36:56 crc kubenswrapper[4780]: I0219 08:36:56.798221 4780 generic.go:334] "Generic (PLEG): container finished" podID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerID="cbe191ec98f10ed39f7bb2beafaaf4638340832998d1694e209deb3b509d3bba" exitCode=0 Feb 19 08:36:56 crc kubenswrapper[4780]: I0219 08:36:56.798261 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerDied","Data":"cbe191ec98f10ed39f7bb2beafaaf4638340832998d1694e209deb3b509d3bba"} Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.094617 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.285809 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqbs6\" (UniqueName: \"kubernetes.io/projected/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-kube-api-access-gqbs6\") pod \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.286201 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-catalog-content\") pod \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.286232 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-utilities\") pod \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\" (UID: \"5bb72a4a-7a8a-4213-9b71-4675e47bcaba\") " Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 
08:36:57.287133 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-utilities" (OuterVolumeSpecName: "utilities") pod "5bb72a4a-7a8a-4213-9b71-4675e47bcaba" (UID: "5bb72a4a-7a8a-4213-9b71-4675e47bcaba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.290949 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-kube-api-access-gqbs6" (OuterVolumeSpecName: "kube-api-access-gqbs6") pod "5bb72a4a-7a8a-4213-9b71-4675e47bcaba" (UID: "5bb72a4a-7a8a-4213-9b71-4675e47bcaba"). InnerVolumeSpecName "kube-api-access-gqbs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.308822 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bb72a4a-7a8a-4213-9b71-4675e47bcaba" (UID: "5bb72a4a-7a8a-4213-9b71-4675e47bcaba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.390963 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqbs6\" (UniqueName: \"kubernetes.io/projected/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-kube-api-access-gqbs6\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.391023 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.391036 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb72a4a-7a8a-4213-9b71-4675e47bcaba-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.806963 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6njc" event={"ID":"5bb72a4a-7a8a-4213-9b71-4675e47bcaba","Type":"ContainerDied","Data":"de377c73515ba424ae5298c653631eb8a685a701c4a58991f14c8cfb875ea34f"} Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.807042 4780 scope.go:117] "RemoveContainer" containerID="cbe191ec98f10ed39f7bb2beafaaf4638340832998d1694e209deb3b509d3bba" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.807042 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6njc" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.814954 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" event={"ID":"e6727151-55c9-47ba-b54e-45938c21180a","Type":"ContainerStarted","Data":"c2f17b037041da96e030e1a23bcaac644d9c7c9f18cb67edc931cdcc51ea24fe"} Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.815102 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.835139 4780 scope.go:117] "RemoveContainer" containerID="8eb9d3148f43ab7638561800f6f6bd404c1e5143b78006b160a1c1e860deafa7" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.855618 4780 scope.go:117] "RemoveContainer" containerID="388830fbd6e1f6f117ffd3d701d0eef3c5553253c154c178f88626c547ee6bfa" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.897833 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" podStartSLOduration=1.731408711 podStartE2EDuration="5.897817618s" podCreationTimestamp="2026-02-19 08:36:52 +0000 UTC" firstStartedPulling="2026-02-19 08:36:52.75286534 +0000 UTC m=+955.496522789" lastFinishedPulling="2026-02-19 08:36:56.919274247 +0000 UTC m=+959.662931696" observedRunningTime="2026-02-19 08:36:57.868625088 +0000 UTC m=+960.612282537" watchObservedRunningTime="2026-02-19 08:36:57.897817618 +0000 UTC m=+960.641475067" Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.899419 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6njc"] Feb 19 08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.904364 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6njc"] Feb 19 
08:36:57 crc kubenswrapper[4780]: I0219 08:36:57.953970 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" path="/var/lib/kubelet/pods/5bb72a4a-7a8a-4213-9b71-4675e47bcaba/volumes" Feb 19 08:37:00 crc kubenswrapper[4780]: I0219 08:37:00.805621 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:37:00 crc kubenswrapper[4780]: I0219 08:37:00.806256 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:37:00 crc kubenswrapper[4780]: I0219 08:37:00.855309 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:37:00 crc kubenswrapper[4780]: I0219 08:37:00.901577 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:37:02 crc kubenswrapper[4780]: I0219 08:37:02.379595 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-gbqtm" Feb 19 08:37:03 crc kubenswrapper[4780]: I0219 08:37:03.675607 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xp6x2"] Feb 19 08:37:03 crc kubenswrapper[4780]: I0219 08:37:03.676238 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xp6x2" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="registry-server" containerID="cri-o://cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee" gracePeriod=2 Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.685246 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.785040 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-utilities\") pod \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.785112 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-catalog-content\") pod \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.785211 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2vf\" (UniqueName: \"kubernetes.io/projected/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-kube-api-access-ll2vf\") pod \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\" (UID: \"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9\") " Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.785975 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-utilities" (OuterVolumeSpecName: "utilities") pod "9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" (UID: "9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.790263 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-kube-api-access-ll2vf" (OuterVolumeSpecName: "kube-api-access-ll2vf") pod "9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" (UID: "9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9"). InnerVolumeSpecName "kube-api-access-ll2vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.833162 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" (UID: "9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.858628 4780 generic.go:334] "Generic (PLEG): container finished" podID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerID="cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee" exitCode=0 Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.858680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xp6x2" event={"ID":"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9","Type":"ContainerDied","Data":"cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee"} Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.858710 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xp6x2" event={"ID":"9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9","Type":"ContainerDied","Data":"ecfe35ac354ce08eccc102e967eb632656854c51dc70ac69e7ef47ea606add06"} Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.858728 4780 scope.go:117] "RemoveContainer" containerID="cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.858680 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xp6x2" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.876389 4780 scope.go:117] "RemoveContainer" containerID="388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.885944 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xp6x2"] Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.886665 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2vf\" (UniqueName: \"kubernetes.io/projected/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-kube-api-access-ll2vf\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.886701 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.886713 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.898836 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xp6x2"] Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.909590 4780 scope.go:117] "RemoveContainer" containerID="589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.924255 4780 scope.go:117] "RemoveContainer" containerID="cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee" Feb 19 08:37:04 crc kubenswrapper[4780]: E0219 08:37:04.925296 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee\": container with ID starting with cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee not found: ID does not exist" containerID="cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.925330 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee"} err="failed to get container status \"cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee\": rpc error: code = NotFound desc = could not find container \"cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee\": container with ID starting with cc69985617a1418825413244f684760e117d8be10c8d7ef470855b13c1b0bcee not found: ID does not exist" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.925371 4780 scope.go:117] "RemoveContainer" containerID="388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0" Feb 19 08:37:04 crc kubenswrapper[4780]: E0219 08:37:04.925867 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0\": container with ID starting with 388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0 not found: ID does not exist" containerID="388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.925906 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0"} err="failed to get container status \"388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0\": rpc error: code = NotFound desc = could not find container \"388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0\": container with ID 
starting with 388b586ac480926201c03b362f07a86070dcf7a0babea40e1cde292d2695b2f0 not found: ID does not exist" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.925934 4780 scope.go:117] "RemoveContainer" containerID="589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52" Feb 19 08:37:04 crc kubenswrapper[4780]: E0219 08:37:04.926265 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52\": container with ID starting with 589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52 not found: ID does not exist" containerID="589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52" Feb 19 08:37:04 crc kubenswrapper[4780]: I0219 08:37:04.926303 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52"} err="failed to get container status \"589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52\": rpc error: code = NotFound desc = could not find container \"589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52\": container with ID starting with 589c8d087d406d12890027fe87151fa98368643705879e70a694a3e5788a2a52 not found: ID does not exist" Feb 19 08:37:05 crc kubenswrapper[4780]: I0219 08:37:05.947707 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" path="/var/lib/kubelet/pods/9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9/volumes" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.134734 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc"] Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.136668 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="extract-utilities" 
Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.136721 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="extract-utilities" Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.136743 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="registry-server" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.136754 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="registry-server" Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.136768 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="extract-content" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.136775 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="extract-content" Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.136787 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="extract-utilities" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.136795 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="extract-utilities" Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.136812 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="registry-server" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.136820 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="registry-server" Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.136840 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="extract-content" Feb 19 
08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.136848 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="extract-content" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.137697 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb72a4a-7a8a-4213-9b71-4675e47bcaba" containerName="registry-server" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.137733 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a752477-c7ff-4bf3-a7e7-1d95ce10cbb9" containerName="registry-server" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.139451 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.143312 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pfp2f" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.147351 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.153859 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.154835 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.158969 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mgdj2" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.164260 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.169644 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.172104 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ls2qq" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.180772 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.181762 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.186531 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9p5nf" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.193435 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.209416 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.235187 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.236052 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.242188 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.242978 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.243823 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j7hss" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.248002 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jtd4n" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.256441 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bctgj"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.257250 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.259745 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cjdrd" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.260035 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.260150 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.271681 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.279085 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.291213 4780 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bctgj"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.302194 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.302460 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwfz\" (UniqueName: \"kubernetes.io/projected/7fa9e6d3-35ef-4e34-908f-709a5e3980b3-kube-api-access-zvwfz\") pod \"glance-operator-controller-manager-77987464f4-pqwdn\" (UID: \"7fa9e6d3-35ef-4e34-908f-709a5e3980b3\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.302507 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4fmm\" (UniqueName: \"kubernetes.io/projected/0bd45130-dc60-4a0b-882d-10f9fbb742d2-kube-api-access-h4fmm\") pod \"designate-operator-controller-manager-6d8bf5c495-bw4n6\" (UID: \"0bd45130-dc60-4a0b-882d-10f9fbb742d2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.302575 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcm2s\" (UniqueName: \"kubernetes.io/projected/2153b5f8-a977-41b6-a736-659e1a71cb99-kube-api-access-jcm2s\") pod \"barbican-operator-controller-manager-868647ff47-gvbzc\" (UID: \"2153b5f8-a977-41b6-a736-659e1a71cb99\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.302593 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mnd\" (UniqueName: 
\"kubernetes.io/projected/4500f812-fa02-4888-8c0c-0627f7bbccf9-kube-api-access-c6mnd\") pod \"cinder-operator-controller-manager-5d946d989d-pbw7v\" (UID: \"4500f812-fa02-4888-8c0c-0627f7bbccf9\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.302950 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.306678 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ng84q" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.307156 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.319178 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.320099 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.322916 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8frsv" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.328332 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.329157 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.333508 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.334386 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.336543 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5hmxt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.336644 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rfwbb" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.345749 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.353559 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.354302 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.357184 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.368202 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.370811 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.371675 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-c66kd" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.377514 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.378786 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.381990 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-f9tjp" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.386414 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.402213 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.402299 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404171 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcm2s\" (UniqueName: \"kubernetes.io/projected/2153b5f8-a977-41b6-a736-659e1a71cb99-kube-api-access-jcm2s\") pod \"barbican-operator-controller-manager-868647ff47-gvbzc\" (UID: \"2153b5f8-a977-41b6-a736-659e1a71cb99\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404184 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404193 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mnd\" (UniqueName: \"kubernetes.io/projected/4500f812-fa02-4888-8c0c-0627f7bbccf9-kube-api-access-c6mnd\") pod \"cinder-operator-controller-manager-5d946d989d-pbw7v\" (UID: \"4500f812-fa02-4888-8c0c-0627f7bbccf9\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404301 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vwm8\" (UniqueName: \"kubernetes.io/projected/53377a47-fc5a-452e-84ca-235e1d71311c-kube-api-access-7vwm8\") pod \"ironic-operator-controller-manager-554564d7fc-glkqc\" (UID: \"53377a47-fc5a-452e-84ca-235e1d71311c\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404343 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7m4t\" (UniqueName: \"kubernetes.io/projected/49bbe48e-3c79-422c-a85b-15198ec1a88f-kube-api-access-l7m4t\") pod \"heat-operator-controller-manager-69f49c598c-ln8qt\" (UID: \"49bbe48e-3c79-422c-a85b-15198ec1a88f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404411 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlqh\" (UniqueName: \"kubernetes.io/projected/4118293d-1deb-4ce3-92e8-6055d0bc5000-kube-api-access-8dlqh\") pod \"horizon-operator-controller-manager-5b9b8895d5-kcn42\" (UID: \"4118293d-1deb-4ce3-92e8-6055d0bc5000\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 
08:37:21.404494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwfz\" (UniqueName: \"kubernetes.io/projected/7fa9e6d3-35ef-4e34-908f-709a5e3980b3-kube-api-access-zvwfz\") pod \"glance-operator-controller-manager-77987464f4-pqwdn\" (UID: \"7fa9e6d3-35ef-4e34-908f-709a5e3980b3\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404533 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4fmm\" (UniqueName: \"kubernetes.io/projected/0bd45130-dc60-4a0b-882d-10f9fbb742d2-kube-api-access-h4fmm\") pod \"designate-operator-controller-manager-6d8bf5c495-bw4n6\" (UID: \"0bd45130-dc60-4a0b-882d-10f9fbb742d2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.404579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nm97\" (UniqueName: \"kubernetes.io/projected/a2874085-9630-45db-aaa1-2e01dd53d11f-kube-api-access-4nm97\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.406269 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mx8hp" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.424713 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.425431 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.434983 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwfz\" (UniqueName: \"kubernetes.io/projected/7fa9e6d3-35ef-4e34-908f-709a5e3980b3-kube-api-access-zvwfz\") pod \"glance-operator-controller-manager-77987464f4-pqwdn\" (UID: \"7fa9e6d3-35ef-4e34-908f-709a5e3980b3\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.435259 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-kq95x" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.435611 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.435907 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcm2s\" (UniqueName: \"kubernetes.io/projected/2153b5f8-a977-41b6-a736-659e1a71cb99-kube-api-access-jcm2s\") pod \"barbican-operator-controller-manager-868647ff47-gvbzc\" (UID: \"2153b5f8-a977-41b6-a736-659e1a71cb99\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.438206 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4fmm\" (UniqueName: \"kubernetes.io/projected/0bd45130-dc60-4a0b-882d-10f9fbb742d2-kube-api-access-h4fmm\") pod \"designate-operator-controller-manager-6d8bf5c495-bw4n6\" (UID: \"0bd45130-dc60-4a0b-882d-10f9fbb742d2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.442012 4780 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.443667 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.453807 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-srvlq" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.455310 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mnd\" (UniqueName: \"kubernetes.io/projected/4500f812-fa02-4888-8c0c-0627f7bbccf9-kube-api-access-c6mnd\") pod \"cinder-operator-controller-manager-5d946d989d-pbw7v\" (UID: \"4500f812-fa02-4888-8c0c-0627f7bbccf9\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.467904 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.469993 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.474113 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.475785 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j6rhn" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.485781 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.492862 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.498784 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506050 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dvb\" (UniqueName: \"kubernetes.io/projected/ad05a5f1-785e-4342-856b-e717d51e36bc-kube-api-access-v9dvb\") pod \"keystone-operator-controller-manager-b4d948c87-474lg\" (UID: \"ad05a5f1-785e-4342-856b-e717d51e36bc\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506294 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbz6m\" (UniqueName: \"kubernetes.io/projected/36d2971c-bf26-4327-9144-f5faa7490b05-kube-api-access-cbz6m\") pod \"manila-operator-controller-manager-54f6768c69-b6vcv\" (UID: \"36d2971c-bf26-4327-9144-f5faa7490b05\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506323 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgh5q\" (UniqueName: \"kubernetes.io/projected/ee0ca95b-15e2-4d79-84d1-8600d083dbb0-kube-api-access-vgh5q\") pod \"mariadb-operator-controller-manager-6994f66f48-pvff4\" (UID: \"ee0ca95b-15e2-4d79-84d1-8600d083dbb0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 
08:37:21.506343 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr9q7\" (UniqueName: \"kubernetes.io/projected/1d22904f-de9c-407e-9757-72c0eca19ea1-kube-api-access-zr9q7\") pod \"octavia-operator-controller-manager-69f8888797-dk2pj\" (UID: \"1d22904f-de9c-407e-9757-72c0eca19ea1\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506369 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nm97\" (UniqueName: \"kubernetes.io/projected/a2874085-9630-45db-aaa1-2e01dd53d11f-kube-api-access-4nm97\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506410 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vwm8\" (UniqueName: \"kubernetes.io/projected/53377a47-fc5a-452e-84ca-235e1d71311c-kube-api-access-7vwm8\") pod \"ironic-operator-controller-manager-554564d7fc-glkqc\" (UID: \"53377a47-fc5a-452e-84ca-235e1d71311c\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506425 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7m4t\" (UniqueName: \"kubernetes.io/projected/49bbe48e-3c79-422c-a85b-15198ec1a88f-kube-api-access-l7m4t\") pod \"heat-operator-controller-manager-69f49c598c-ln8qt\" (UID: \"49bbe48e-3c79-422c-a85b-15198ec1a88f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506441 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlqh\" (UniqueName: 
\"kubernetes.io/projected/4118293d-1deb-4ce3-92e8-6055d0bc5000-kube-api-access-8dlqh\") pod \"horizon-operator-controller-manager-5b9b8895d5-kcn42\" (UID: \"4118293d-1deb-4ce3-92e8-6055d0bc5000\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506457 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506481 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5xsm\" (UniqueName: \"kubernetes.io/projected/2744300e-54a9-4fba-88a0-fe6741f88116-kube-api-access-g5xsm\") pod \"neutron-operator-controller-manager-64ddbf8bb-rxkld\" (UID: \"2744300e-54a9-4fba-88a0-fe6741f88116\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.506504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxfmn\" (UniqueName: \"kubernetes.io/projected/efb49c0a-bd0d-4ad4-befd-1e4a645afcc0-kube-api-access-mxfmn\") pod \"nova-operator-controller-manager-567668f5cf-jmvjt\" (UID: \"efb49c0a-bd0d-4ad4-befd-1e4a645afcc0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.507149 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.507188 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert podName:a2874085-9630-45db-aaa1-2e01dd53d11f nodeName:}" failed. No retries permitted until 2026-02-19 08:37:22.007174664 +0000 UTC m=+984.750832113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert") pod "infra-operator-controller-manager-79d975b745-bctgj" (UID: "a2874085-9630-45db-aaa1-2e01dd53d11f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.512171 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.528515 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.531148 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.531945 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.533720 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-86f64" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.534679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vwm8\" (UniqueName: \"kubernetes.io/projected/53377a47-fc5a-452e-84ca-235e1d71311c-kube-api-access-7vwm8\") pod \"ironic-operator-controller-manager-554564d7fc-glkqc\" (UID: \"53377a47-fc5a-452e-84ca-235e1d71311c\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.539947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7m4t\" (UniqueName: \"kubernetes.io/projected/49bbe48e-3c79-422c-a85b-15198ec1a88f-kube-api-access-l7m4t\") pod \"heat-operator-controller-manager-69f49c598c-ln8qt\" (UID: \"49bbe48e-3c79-422c-a85b-15198ec1a88f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.544525 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.548002 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlqh\" (UniqueName: \"kubernetes.io/projected/4118293d-1deb-4ce3-92e8-6055d0bc5000-kube-api-access-8dlqh\") pod \"horizon-operator-controller-manager-5b9b8895d5-kcn42\" (UID: \"4118293d-1deb-4ce3-92e8-6055d0bc5000\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.548206 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4nm97\" (UniqueName: \"kubernetes.io/projected/a2874085-9630-45db-aaa1-2e01dd53d11f-kube-api-access-4nm97\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.561300 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.563291 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.591168 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.595988 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.597306 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607393 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgh5q\" (UniqueName: \"kubernetes.io/projected/ee0ca95b-15e2-4d79-84d1-8600d083dbb0-kube-api-access-vgh5q\") pod \"mariadb-operator-controller-manager-6994f66f48-pvff4\" (UID: \"ee0ca95b-15e2-4d79-84d1-8600d083dbb0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607435 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr9q7\" (UniqueName: \"kubernetes.io/projected/1d22904f-de9c-407e-9757-72c0eca19ea1-kube-api-access-zr9q7\") pod \"octavia-operator-controller-manager-69f8888797-dk2pj\" (UID: \"1d22904f-de9c-407e-9757-72c0eca19ea1\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607476 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28xf\" (UniqueName: \"kubernetes.io/projected/506816c7-86de-45c3-800d-96fe50b629f1-kube-api-access-t28xf\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607529 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5xsm\" (UniqueName: \"kubernetes.io/projected/2744300e-54a9-4fba-88a0-fe6741f88116-kube-api-access-g5xsm\") pod \"neutron-operator-controller-manager-64ddbf8bb-rxkld\" (UID: \"2744300e-54a9-4fba-88a0-fe6741f88116\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" Feb 19 
08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rz48\" (UniqueName: \"kubernetes.io/projected/5c421e2a-bf97-429f-9cb1-8945c54d4927-kube-api-access-7rz48\") pod \"ovn-operator-controller-manager-d44cf6b75-cc8zw\" (UID: \"5c421e2a-bf97-429f-9cb1-8945c54d4927\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607570 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607587 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxfmn\" (UniqueName: \"kubernetes.io/projected/efb49c0a-bd0d-4ad4-befd-1e4a645afcc0-kube-api-access-mxfmn\") pod \"nova-operator-controller-manager-567668f5cf-jmvjt\" (UID: \"efb49c0a-bd0d-4ad4-befd-1e4a645afcc0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607607 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbgmz\" (UniqueName: \"kubernetes.io/projected/a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2-kube-api-access-tbgmz\") pod \"placement-operator-controller-manager-8497b45c89-lbtq7\" (UID: \"a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607639 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-v9dvb\" (UniqueName: \"kubernetes.io/projected/ad05a5f1-785e-4342-856b-e717d51e36bc-kube-api-access-v9dvb\") pod \"keystone-operator-controller-manager-b4d948c87-474lg\" (UID: \"ad05a5f1-785e-4342-856b-e717d51e36bc\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.607657 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbz6m\" (UniqueName: \"kubernetes.io/projected/36d2971c-bf26-4327-9144-f5faa7490b05-kube-api-access-cbz6m\") pod \"manila-operator-controller-manager-54f6768c69-b6vcv\" (UID: \"36d2971c-bf26-4327-9144-f5faa7490b05\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.610850 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6j4vl" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.626479 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.627400 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbz6m\" (UniqueName: \"kubernetes.io/projected/36d2971c-bf26-4327-9144-f5faa7490b05-kube-api-access-cbz6m\") pod \"manila-operator-controller-manager-54f6768c69-b6vcv\" (UID: \"36d2971c-bf26-4327-9144-f5faa7490b05\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.642384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr9q7\" (UniqueName: \"kubernetes.io/projected/1d22904f-de9c-407e-9757-72c0eca19ea1-kube-api-access-zr9q7\") pod \"octavia-operator-controller-manager-69f8888797-dk2pj\" (UID: \"1d22904f-de9c-407e-9757-72c0eca19ea1\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.645952 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgh5q\" (UniqueName: \"kubernetes.io/projected/ee0ca95b-15e2-4d79-84d1-8600d083dbb0-kube-api-access-vgh5q\") pod \"mariadb-operator-controller-manager-6994f66f48-pvff4\" (UID: \"ee0ca95b-15e2-4d79-84d1-8600d083dbb0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.646281 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.648488 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dvb\" (UniqueName: \"kubernetes.io/projected/ad05a5f1-785e-4342-856b-e717d51e36bc-kube-api-access-v9dvb\") pod \"keystone-operator-controller-manager-b4d948c87-474lg\" (UID: \"ad05a5f1-785e-4342-856b-e717d51e36bc\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.648802 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxfmn\" (UniqueName: \"kubernetes.io/projected/efb49c0a-bd0d-4ad4-befd-1e4a645afcc0-kube-api-access-mxfmn\") pod \"nova-operator-controller-manager-567668f5cf-jmvjt\" (UID: \"efb49c0a-bd0d-4ad4-befd-1e4a645afcc0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.650570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5xsm\" (UniqueName: \"kubernetes.io/projected/2744300e-54a9-4fba-88a0-fe6741f88116-kube-api-access-g5xsm\") pod \"neutron-operator-controller-manager-64ddbf8bb-rxkld\" (UID: \"2744300e-54a9-4fba-88a0-fe6741f88116\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.656969 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.672604 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.688625 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.699292 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-gwz8w"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.700251 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.701560 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-gwz8w"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.704263 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.704793 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-c7kld" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.708597 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rz48\" (UniqueName: \"kubernetes.io/projected/5c421e2a-bf97-429f-9cb1-8945c54d4927-kube-api-access-7rz48\") pod \"ovn-operator-controller-manager-d44cf6b75-cc8zw\" (UID: \"5c421e2a-bf97-429f-9cb1-8945c54d4927\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.708645 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.708681 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbgmz\" (UniqueName: \"kubernetes.io/projected/a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2-kube-api-access-tbgmz\") pod \"placement-operator-controller-manager-8497b45c89-lbtq7\" (UID: \"a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.708722 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7n7b\" (UniqueName: \"kubernetes.io/projected/7a8f8f4b-597a-4eb7-b416-81ae3f73e306-kube-api-access-z7n7b\") pod \"swift-operator-controller-manager-68f46476f-rmrs9\" (UID: \"7a8f8f4b-597a-4eb7-b416-81ae3f73e306\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.708774 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582gq\" (UniqueName: \"kubernetes.io/projected/97826b66-db76-40d7-a06f-cb6f55739cc9-kube-api-access-582gq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-6dr7r\" (UID: \"97826b66-db76-40d7-a06f-cb6f55739cc9\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.708815 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28xf\" (UniqueName: \"kubernetes.io/projected/506816c7-86de-45c3-800d-96fe50b629f1-kube-api-access-t28xf\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 
19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.709075 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:21 crc kubenswrapper[4780]: E0219 08:37:21.709144 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert podName:506816c7-86de-45c3-800d-96fe50b629f1 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:22.209112434 +0000 UTC m=+984.952769893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" (UID: "506816c7-86de-45c3-800d-96fe50b629f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.716411 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.724083 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.724958 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.731049 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.737418 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-l4hkk" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.743756 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbgmz\" (UniqueName: \"kubernetes.io/projected/a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2-kube-api-access-tbgmz\") pod \"placement-operator-controller-manager-8497b45c89-lbtq7\" (UID: \"a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.754100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rz48\" (UniqueName: \"kubernetes.io/projected/5c421e2a-bf97-429f-9cb1-8945c54d4927-kube-api-access-7rz48\") pod \"ovn-operator-controller-manager-d44cf6b75-cc8zw\" (UID: \"5c421e2a-bf97-429f-9cb1-8945c54d4927\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.761375 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.761948 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28xf\" (UniqueName: \"kubernetes.io/projected/506816c7-86de-45c3-800d-96fe50b629f1-kube-api-access-t28xf\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.810507 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z7n7b\" (UniqueName: \"kubernetes.io/projected/7a8f8f4b-597a-4eb7-b416-81ae3f73e306-kube-api-access-z7n7b\") pod \"swift-operator-controller-manager-68f46476f-rmrs9\" (UID: \"7a8f8f4b-597a-4eb7-b416-81ae3f73e306\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.810581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582gq\" (UniqueName: \"kubernetes.io/projected/97826b66-db76-40d7-a06f-cb6f55739cc9-kube-api-access-582gq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-6dr7r\" (UID: \"97826b66-db76-40d7-a06f-cb6f55739cc9\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.810622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhjf\" (UniqueName: \"kubernetes.io/projected/e6ba6725-19ae-4588-9ff1-a9830487fa82-kube-api-access-6nhjf\") pod \"test-operator-controller-manager-7866795846-gwz8w\" (UID: \"e6ba6725-19ae-4588-9ff1-a9830487fa82\") " pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.832496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582gq\" (UniqueName: \"kubernetes.io/projected/97826b66-db76-40d7-a06f-cb6f55739cc9-kube-api-access-582gq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-6dr7r\" (UID: \"97826b66-db76-40d7-a06f-cb6f55739cc9\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.847920 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7n7b\" (UniqueName: 
\"kubernetes.io/projected/7a8f8f4b-597a-4eb7-b416-81ae3f73e306-kube-api-access-z7n7b\") pod \"swift-operator-controller-manager-68f46476f-rmrs9\" (UID: \"7a8f8f4b-597a-4eb7-b416-81ae3f73e306\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.865191 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.866098 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.868417 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6xkzk" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.868733 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.868841 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.889875 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.900531 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.912408 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjfmt\" (UniqueName: \"kubernetes.io/projected/629b235e-a906-442c-b653-c829f6f4e4bd-kube-api-access-wjfmt\") pod \"watcher-operator-controller-manager-5db88f68c-czsms\" (UID: \"629b235e-a906-442c-b653-c829f6f4e4bd\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.912477 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhjf\" (UniqueName: \"kubernetes.io/projected/e6ba6725-19ae-4588-9ff1-a9830487fa82-kube-api-access-6nhjf\") pod \"test-operator-controller-manager-7866795846-gwz8w\" (UID: \"e6ba6725-19ae-4588-9ff1-a9830487fa82\") " pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.927423 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.943647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhjf\" (UniqueName: \"kubernetes.io/projected/e6ba6725-19ae-4588-9ff1-a9830487fa82-kube-api-access-6nhjf\") pod \"test-operator-controller-manager-7866795846-gwz8w\" (UID: \"e6ba6725-19ae-4588-9ff1-a9830487fa82\") " pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.943953 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.958378 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.977879 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb"] Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.978976 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" Feb 19 08:37:21 crc kubenswrapper[4780]: I0219 08:37:21.985997 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kdt69" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.000881 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.014451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.014515 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.014552 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjfmt\" (UniqueName: \"kubernetes.io/projected/629b235e-a906-442c-b653-c829f6f4e4bd-kube-api-access-wjfmt\") pod \"watcher-operator-controller-manager-5db88f68c-czsms\" (UID: \"629b235e-a906-442c-b653-c829f6f4e4bd\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.014578 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxknh\" (UniqueName: \"kubernetes.io/projected/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-kube-api-access-gxknh\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.014615 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.014738 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.014777 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert podName:a2874085-9630-45db-aaa1-2e01dd53d11f nodeName:}" failed. 
No retries permitted until 2026-02-19 08:37:23.014763987 +0000 UTC m=+985.758421436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert") pod "infra-operator-controller-manager-79d975b745-bctgj" (UID: "a2874085-9630-45db-aaa1-2e01dd53d11f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.042787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjfmt\" (UniqueName: \"kubernetes.io/projected/629b235e-a906-442c-b653-c829f6f4e4bd-kube-api-access-wjfmt\") pod \"watcher-operator-controller-manager-5db88f68c-czsms\" (UID: \"629b235e-a906-442c-b653-c829f6f4e4bd\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.115860 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.115931 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxknh\" (UniqueName: \"kubernetes.io/projected/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-kube-api-access-gxknh\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.116101 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.116188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8bk\" (UniqueName: \"kubernetes.io/projected/53b4b555-856b-4db0-b8e5-de61ff768cc6-kube-api-access-xb8bk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9lssb\" (UID: \"53b4b555-856b-4db0-b8e5-de61ff768cc6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.116349 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.116391 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:22.616375758 +0000 UTC m=+985.360033207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.117295 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.117364 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:22.617346062 +0000 UTC m=+985.361003511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "metrics-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.132690 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxknh\" (UniqueName: \"kubernetes.io/projected/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-kube-api-access-gxknh\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.159907 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.181906 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.218090 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb8bk\" (UniqueName: \"kubernetes.io/projected/53b4b555-856b-4db0-b8e5-de61ff768cc6-kube-api-access-xb8bk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9lssb\" (UID: \"53b4b555-856b-4db0-b8e5-de61ff768cc6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.218188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.218478 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.218569 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert podName:506816c7-86de-45c3-800d-96fe50b629f1 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:23.218544883 +0000 UTC m=+985.962202332 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" (UID: "506816c7-86de-45c3-800d-96fe50b629f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.242466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb8bk\" (UniqueName: \"kubernetes.io/projected/53b4b555-856b-4db0-b8e5-de61ff768cc6-kube-api-access-xb8bk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9lssb\" (UID: \"53b4b555-856b-4db0-b8e5-de61ff768cc6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.257685 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.266386 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.280644 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.320806 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.408253 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.605267 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.605319 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg"] Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.607664 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53377a47_fc5a_452e_84ca_235e1d71311c.slice/crio-5edaa2e48dd2e9cf96c9a92ee9d6cf4b9d8ecde8aef6e5f29e385aacd8f201a8 WatchSource:0}: Error finding container 5edaa2e48dd2e9cf96c9a92ee9d6cf4b9d8ecde8aef6e5f29e385aacd8f201a8: Status 404 returned error can't find the container with id 5edaa2e48dd2e9cf96c9a92ee9d6cf4b9d8ecde8aef6e5f29e385aacd8f201a8 Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.608070 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad05a5f1_785e_4342_856b_e717d51e36bc.slice/crio-8a282ab4444b046ac618ad1475abac4878406c732466eb255576f99c61aaf31a WatchSource:0}: Error finding container 8a282ab4444b046ac618ad1475abac4878406c732466eb255576f99c61aaf31a: Status 404 returned error can't find the container with id 8a282ab4444b046ac618ad1475abac4878406c732466eb255576f99c61aaf31a Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.621525 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld"] Feb 19 08:37:22 crc 
kubenswrapper[4780]: I0219 08:37:22.626887 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.626975 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.627096 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.627173 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.627196 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:23.627178001 +0000 UTC m=+986.370835450 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "webhook-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.627252 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:23.627233973 +0000 UTC m=+986.370891422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "metrics-server-cert" not found Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.632208 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2744300e_54a9_4fba_88a0_fe6741f88116.slice/crio-e2cef3cc82f55306b8a204a5a2ace0261fae0538fff9d0d17d090095c97c5306 WatchSource:0}: Error finding container e2cef3cc82f55306b8a204a5a2ace0261fae0538fff9d0d17d090095c97c5306: Status 404 returned error can't find the container with id e2cef3cc82f55306b8a204a5a2ace0261fae0538fff9d0d17d090095c97c5306 Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.640438 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.653983 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt"] Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.660464 4780 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0ca95b_15e2_4d79_84d1_8600d083dbb0.slice/crio-c1ed4a720dd1f472a957a0e74b7f27d163c013e6d9ad22058e024518e0962654 WatchSource:0}: Error finding container c1ed4a720dd1f472a957a0e74b7f27d163c013e6d9ad22058e024518e0962654: Status 404 returned error can't find the container with id c1ed4a720dd1f472a957a0e74b7f27d163c013e6d9ad22058e024518e0962654 Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.793972 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt"] Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.795068 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb49c0a_bd0d_4ad4_befd_1e4a645afcc0.slice/crio-b049ca428ccbf73fe4d050c719718ca0d7c5fd1e85ed8cb29d17b89eb4f5d9df WatchSource:0}: Error finding container b049ca428ccbf73fe4d050c719718ca0d7c5fd1e85ed8cb29d17b89eb4f5d9df: Status 404 returned error can't find the container with id b049ca428ccbf73fe4d050c719718ca0d7c5fd1e85ed8cb29d17b89eb4f5d9df Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.804093 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj"] Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.814944 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4118293d_1deb_4ce3_92e8_6055d0bc5000.slice/crio-25561d66dad46eba63789632087d236d7f14004f66b29645d76bbe7d93e30bca WatchSource:0}: Error finding container 25561d66dad46eba63789632087d236d7f14004f66b29645d76bbe7d93e30bca: Status 404 returned error can't find the container with id 25561d66dad46eba63789632087d236d7f14004f66b29645d76bbe7d93e30bca Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.814994 4780 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42"] Feb 19 08:37:22 crc kubenswrapper[4780]: W0219 08:37:22.815920 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c421e2a_bf97_429f_9cb1_8945c54d4927.slice/crio-4018c9f2b3cec3ad646e3c810be7ece337d11972f85272550213145012c034d1 WatchSource:0}: Error finding container 4018c9f2b3cec3ad646e3c810be7ece337d11972f85272550213145012c034d1: Status 404 returned error can't find the container with id 4018c9f2b3cec3ad646e3c810be7ece337d11972f85272550213145012c034d1 Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.820743 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw"] Feb 19 08:37:22 crc kubenswrapper[4780]: I0219 08:37:22.825090 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv"] Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.829385 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zr9q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-dk2pj_openstack-operators(1d22904f-de9c-407e-9757-72c0eca19ea1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.830527 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" podUID="1d22904f-de9c-407e-9757-72c0eca19ea1" Feb 19 08:37:22 crc 
kubenswrapper[4780]: E0219 08:37:22.831961 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cbz6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-b6vcv_openstack-operators(36d2971c-bf26-4327-9144-f5faa7490b05): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:37:22 crc kubenswrapper[4780]: E0219 08:37:22.833059 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" podUID="36d2971c-bf26-4327-9144-f5faa7490b05" Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.006204 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r"] Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.019453 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-gwz8w"] Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.040896 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" 
(UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.041047 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.041096 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert podName:a2874085-9630-45db-aaa1-2e01dd53d11f nodeName:}" failed. No retries permitted until 2026-02-19 08:37:25.04108127 +0000 UTC m=+987.784738719 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert") pod "infra-operator-controller-manager-79d975b745-bctgj" (UID: "a2874085-9630-45db-aaa1-2e01dd53d11f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.068276 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9"] Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.082855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" event={"ID":"4500f812-fa02-4888-8c0c-0627f7bbccf9","Type":"ContainerStarted","Data":"2f47e0a10172548bea31ed885f18eb2c09fbbd28173e96817f5d432e40eead76"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.088217 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" event={"ID":"2744300e-54a9-4fba-88a0-fe6741f88116","Type":"ContainerStarted","Data":"e2cef3cc82f55306b8a204a5a2ace0261fae0538fff9d0d17d090095c97c5306"} Feb 19 08:37:23 crc kubenswrapper[4780]: W0219 08:37:23.088569 4780 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97826b66_db76_40d7_a06f_cb6f55739cc9.slice/crio-9a17d03b74c7f75cfa0dfd0c1f92583f829c38fb091991c075b240559ba9349a WatchSource:0}: Error finding container 9a17d03b74c7f75cfa0dfd0c1f92583f829c38fb091991c075b240559ba9349a: Status 404 returned error can't find the container with id 9a17d03b74c7f75cfa0dfd0c1f92583f829c38fb091991c075b240559ba9349a Feb 19 08:37:23 crc kubenswrapper[4780]: W0219 08:37:23.089977 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a8f8f4b_597a_4eb7_b416_81ae3f73e306.slice/crio-092a3bc6709521957d01000de5ca2fd2120d1a6d46a42cfc89507803353ad891 WatchSource:0}: Error finding container 092a3bc6709521957d01000de5ca2fd2120d1a6d46a42cfc89507803353ad891: Status 404 returned error can't find the container with id 092a3bc6709521957d01000de5ca2fd2120d1a6d46a42cfc89507803353ad891 Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.091066 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" event={"ID":"7fa9e6d3-35ef-4e34-908f-709a5e3980b3","Type":"ContainerStarted","Data":"e181beafdb8cc5bf88a9cefa5f60974159a8d78bcc5cafd3e25c619ec7c6fa4c"} Feb 19 08:37:23 crc kubenswrapper[4780]: W0219 08:37:23.093571 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ba6725_19ae_4588_9ff1_a9830487fa82.slice/crio-c56351f14e01f63fb1c2bf844edaf5c319ea6a6a4fbf9aa528d45bc7f09b05e0 WatchSource:0}: Error finding container c56351f14e01f63fb1c2bf844edaf5c319ea6a6a4fbf9aa528d45bc7f09b05e0: Status 404 returned error can't find the container with id c56351f14e01f63fb1c2bf844edaf5c319ea6a6a4fbf9aa528d45bc7f09b05e0 Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.093628 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" event={"ID":"ee0ca95b-15e2-4d79-84d1-8600d083dbb0","Type":"ContainerStarted","Data":"c1ed4a720dd1f472a957a0e74b7f27d163c013e6d9ad22058e024518e0962654"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.095702 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" event={"ID":"5c421e2a-bf97-429f-9cb1-8945c54d4927","Type":"ContainerStarted","Data":"4018c9f2b3cec3ad646e3c810be7ece337d11972f85272550213145012c034d1"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.099343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" event={"ID":"ad05a5f1-785e-4342-856b-e717d51e36bc","Type":"ContainerStarted","Data":"8a282ab4444b046ac618ad1475abac4878406c732466eb255576f99c61aaf31a"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.106112 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7"] Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.107793 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nhjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-gwz8w_openstack-operators(e6ba6725-19ae-4588-9ff1-a9830487fa82): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.110772 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" 
podUID="e6ba6725-19ae-4588-9ff1-a9830487fa82" Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.112438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" event={"ID":"2153b5f8-a977-41b6-a736-659e1a71cb99","Type":"ContainerStarted","Data":"49b523c4a78400dba84bdbb82dc80aeef974ec572269c51bdac34ba5ca9e6aec"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.113673 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms"] Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.113851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" event={"ID":"4118293d-1deb-4ce3-92e8-6055d0bc5000","Type":"ContainerStarted","Data":"25561d66dad46eba63789632087d236d7f14004f66b29645d76bbe7d93e30bca"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.117316 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" event={"ID":"1d22904f-de9c-407e-9757-72c0eca19ea1","Type":"ContainerStarted","Data":"28ab25d9ac87e9937c1f3dc8ffab538d2cc79cd67a3266fb5e0d2fa66b68f707"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.117989 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb"] Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.118667 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" podUID="1d22904f-de9c-407e-9757-72c0eca19ea1" Feb 19 08:37:23 crc 
kubenswrapper[4780]: I0219 08:37:23.120441 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" event={"ID":"36d2971c-bf26-4327-9144-f5faa7490b05","Type":"ContainerStarted","Data":"f229f7eb014a40a806ec1228b81d655774543e3a166bffb5d302ba33e9266afd"} Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.127965 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" podUID="36d2971c-bf26-4327-9144-f5faa7490b05" Feb 19 08:37:23 crc kubenswrapper[4780]: W0219 08:37:23.128485 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b4b555_856b_4db0_b8e5_de61ff768cc6.slice/crio-8e5c9f169358e3dd19f301503e18c3880b9d5c4b2e04b2f13860cd4c0d568d51 WatchSource:0}: Error finding container 8e5c9f169358e3dd19f301503e18c3880b9d5c4b2e04b2f13860cd4c0d568d51: Status 404 returned error can't find the container with id 8e5c9f169358e3dd19f301503e18c3880b9d5c4b2e04b2f13860cd4c0d568d51 Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.128844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" event={"ID":"49bbe48e-3c79-422c-a85b-15198ec1a88f","Type":"ContainerStarted","Data":"c0c514387b9970f2b254065b98c32228c3efe2451c6bc9e6c144de10f4f23597"} Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.134810 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xb8bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9lssb_openstack-operators(53b4b555-856b-4db0-b8e5-de61ff768cc6): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.135180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" event={"ID":"0bd45130-dc60-4a0b-882d-10f9fbb742d2","Type":"ContainerStarted","Data":"6d777133beacb618da80e53456e885949c086dd844bc16ef90ca75d9b39e8538"} Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.137340 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" podUID="53b4b555-856b-4db0-b8e5-de61ff768cc6" Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.138002 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" event={"ID":"53377a47-fc5a-452e-84ca-235e1d71311c","Type":"ContainerStarted","Data":"5edaa2e48dd2e9cf96c9a92ee9d6cf4b9d8ecde8aef6e5f29e385aacd8f201a8"} Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.139172 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" event={"ID":"efb49c0a-bd0d-4ad4-befd-1e4a645afcc0","Type":"ContainerStarted","Data":"b049ca428ccbf73fe4d050c719718ca0d7c5fd1e85ed8cb29d17b89eb4f5d9df"} Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.141017 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbgmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-lbtq7_openstack-operators(a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.142158 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" podUID="a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.155326 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjfmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-czsms_openstack-operators(629b235e-a906-442c-b653-c829f6f4e4bd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.156814 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" podUID="629b235e-a906-442c-b653-c829f6f4e4bd" Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.243302 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.243446 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found 
Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.243530 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert podName:506816c7-86de-45c3-800d-96fe50b629f1 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:25.243513312 +0000 UTC m=+987.987170761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" (UID: "506816c7-86de-45c3-800d-96fe50b629f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.648196 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:23 crc kubenswrapper[4780]: I0219 08:37:23.648301 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.648410 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.648464 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 
08:37:23.648504 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:25.648480039 +0000 UTC m=+988.392137488 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "webhook-server-cert" not found Feb 19 08:37:23 crc kubenswrapper[4780]: E0219 08:37:23.648529 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:25.64851984 +0000 UTC m=+988.392177399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "metrics-server-cert" not found Feb 19 08:37:24 crc kubenswrapper[4780]: I0219 08:37:24.152361 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" event={"ID":"7a8f8f4b-597a-4eb7-b416-81ae3f73e306","Type":"ContainerStarted","Data":"092a3bc6709521957d01000de5ca2fd2120d1a6d46a42cfc89507803353ad891"} Feb 19 08:37:24 crc kubenswrapper[4780]: I0219 08:37:24.155506 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" event={"ID":"97826b66-db76-40d7-a06f-cb6f55739cc9","Type":"ContainerStarted","Data":"9a17d03b74c7f75cfa0dfd0c1f92583f829c38fb091991c075b240559ba9349a"} Feb 19 
08:37:24 crc kubenswrapper[4780]: I0219 08:37:24.158215 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" event={"ID":"a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2","Type":"ContainerStarted","Data":"a92ac2dfd3d021cab1987d94dfe113e2b97e2e2fc4f8e65218cdbd61e19041fd"} Feb 19 08:37:24 crc kubenswrapper[4780]: E0219 08:37:24.161696 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" podUID="a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2" Feb 19 08:37:24 crc kubenswrapper[4780]: I0219 08:37:24.168322 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" event={"ID":"53b4b555-856b-4db0-b8e5-de61ff768cc6","Type":"ContainerStarted","Data":"8e5c9f169358e3dd19f301503e18c3880b9d5c4b2e04b2f13860cd4c0d568d51"} Feb 19 08:37:24 crc kubenswrapper[4780]: E0219 08:37:24.171991 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" podUID="53b4b555-856b-4db0-b8e5-de61ff768cc6" Feb 19 08:37:24 crc kubenswrapper[4780]: I0219 08:37:24.173756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" event={"ID":"e6ba6725-19ae-4588-9ff1-a9830487fa82","Type":"ContainerStarted","Data":"c56351f14e01f63fb1c2bf844edaf5c319ea6a6a4fbf9aa528d45bc7f09b05e0"} 
Feb 19 08:37:24 crc kubenswrapper[4780]: E0219 08:37:24.176360 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" podUID="e6ba6725-19ae-4588-9ff1-a9830487fa82" Feb 19 08:37:24 crc kubenswrapper[4780]: I0219 08:37:24.177678 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" event={"ID":"629b235e-a906-442c-b653-c829f6f4e4bd","Type":"ContainerStarted","Data":"af6094d154b2d24d2c9d938f9f63870b5dfd5cee7300040e1e30132bfa7594d0"} Feb 19 08:37:24 crc kubenswrapper[4780]: E0219 08:37:24.178588 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" podUID="36d2971c-bf26-4327-9144-f5faa7490b05" Feb 19 08:37:24 crc kubenswrapper[4780]: E0219 08:37:24.178721 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" podUID="1d22904f-de9c-407e-9757-72c0eca19ea1" Feb 19 08:37:24 crc kubenswrapper[4780]: E0219 08:37:24.181718 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" podUID="629b235e-a906-442c-b653-c829f6f4e4bd" Feb 19 08:37:25 crc kubenswrapper[4780]: I0219 08:37:25.075085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.075284 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.075365 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert podName:a2874085-9630-45db-aaa1-2e01dd53d11f nodeName:}" failed. No retries permitted until 2026-02-19 08:37:29.075342279 +0000 UTC m=+991.818999768 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert") pod "infra-operator-controller-manager-79d975b745-bctgj" (UID: "a2874085-9630-45db-aaa1-2e01dd53d11f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.185089 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" podUID="629b235e-a906-442c-b653-c829f6f4e4bd" Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.186376 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" podUID="e6ba6725-19ae-4588-9ff1-a9830487fa82" Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.192029 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" podUID="a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2" Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.192037 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" podUID="53b4b555-856b-4db0-b8e5-de61ff768cc6" Feb 19 08:37:25 crc kubenswrapper[4780]: I0219 08:37:25.278532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.278782 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.278866 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert podName:506816c7-86de-45c3-800d-96fe50b629f1 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:29.278846338 +0000 UTC m=+992.022503797 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" (UID: "506816c7-86de-45c3-800d-96fe50b629f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: I0219 08:37:25.684035 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:25 crc kubenswrapper[4780]: I0219 08:37:25.685212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.684361 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.685388 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:29.685373854 +0000 UTC m=+992.429031303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "metrics-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.685338 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:37:25 crc kubenswrapper[4780]: E0219 08:37:25.686162 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:29.686148533 +0000 UTC m=+992.429805982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: I0219 08:37:29.141831 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.142034 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.142309 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert 
podName:a2874085-9630-45db-aaa1-2e01dd53d11f nodeName:}" failed. No retries permitted until 2026-02-19 08:37:37.142290329 +0000 UTC m=+999.885947778 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert") pod "infra-operator-controller-manager-79d975b745-bctgj" (UID: "a2874085-9630-45db-aaa1-2e01dd53d11f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: I0219 08:37:29.344523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.344688 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.344861 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert podName:506816c7-86de-45c3-800d-96fe50b629f1 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:37.344838204 +0000 UTC m=+1000.088495653 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" (UID: "506816c7-86de-45c3-800d-96fe50b629f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: I0219 08:37:29.749319 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:29 crc kubenswrapper[4780]: I0219 08:37:29.749433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.749461 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.749523 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:37.749506104 +0000 UTC m=+1000.493163543 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "webhook-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.749602 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:37:29 crc kubenswrapper[4780]: E0219 08:37:29.749663 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:37.749647308 +0000 UTC m=+1000.493304847 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "metrics-server-cert" not found Feb 19 08:37:34 crc kubenswrapper[4780]: E0219 08:37:34.631724 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 19 08:37:34 crc kubenswrapper[4780]: E0219 08:37:34.632544 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5xsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-rxkld_openstack-operators(2744300e-54a9-4fba-88a0-fe6741f88116): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:37:34 crc kubenswrapper[4780]: E0219 08:37:34.633985 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" podUID="2744300e-54a9-4fba-88a0-fe6741f88116" Feb 19 08:37:35 crc kubenswrapper[4780]: E0219 08:37:35.159597 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 08:37:35 crc kubenswrapper[4780]: E0219 08:37:35.159901 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v9dvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-474lg_openstack-operators(ad05a5f1-785e-4342-856b-e717d51e36bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:37:35 crc kubenswrapper[4780]: E0219 08:37:35.161052 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" podUID="ad05a5f1-785e-4342-856b-e717d51e36bc" Feb 19 08:37:35 crc kubenswrapper[4780]: E0219 08:37:35.261241 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" podUID="ad05a5f1-785e-4342-856b-e717d51e36bc" Feb 19 08:37:35 crc kubenswrapper[4780]: E0219 08:37:35.265911 4780 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" podUID="2744300e-54a9-4fba-88a0-fe6741f88116" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.260015 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" event={"ID":"ee0ca95b-15e2-4d79-84d1-8600d083dbb0","Type":"ContainerStarted","Data":"0ae75ecf7ce6093ad2332ef390988d8aba4046b2fb1d5e38cdccd165823cee17"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.260489 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.262370 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" event={"ID":"4118293d-1deb-4ce3-92e8-6055d0bc5000","Type":"ContainerStarted","Data":"0365bd8c7dd19cc21f4791b0c9911a6eb080051f10f5e42d6465e38d8cd47bfd"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.262517 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.263969 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" event={"ID":"efb49c0a-bd0d-4ad4-befd-1e4a645afcc0","Type":"ContainerStarted","Data":"23b619c963f54ebd6b6792f0c4a0610a6d9c5bf53d70163cf52d16e405c41a96"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.264107 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.266563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" event={"ID":"5c421e2a-bf97-429f-9cb1-8945c54d4927","Type":"ContainerStarted","Data":"6e0ad001d754f9642c5c42fc0aaeef3a976773e4bfc67226c48c5a473309b005"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.266650 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.268484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" event={"ID":"4500f812-fa02-4888-8c0c-0627f7bbccf9","Type":"ContainerStarted","Data":"eccfa903a2121f01fa4dc5f8887f8bc04867216e0c0d0990edb17d86e73aca5b"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.271767 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.274221 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" event={"ID":"7fa9e6d3-35ef-4e34-908f-709a5e3980b3","Type":"ContainerStarted","Data":"32835240c293be16678b148f2c2d3465ce3dff2cd21727198dbe73d6467bf71c"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.274378 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.279734 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" 
event={"ID":"7a8f8f4b-597a-4eb7-b416-81ae3f73e306","Type":"ContainerStarted","Data":"4ff3316c9e356873129827b3a887eb1ad303120576610f7984d0059c3288f5df"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.280236 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.280855 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" podStartSLOduration=2.337955325 podStartE2EDuration="15.280837416s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.67648782 +0000 UTC m=+985.420145269" lastFinishedPulling="2026-02-19 08:37:35.619369911 +0000 UTC m=+998.363027360" observedRunningTime="2026-02-19 08:37:36.277267898 +0000 UTC m=+999.020925347" watchObservedRunningTime="2026-02-19 08:37:36.280837416 +0000 UTC m=+999.024494865" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.284845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" event={"ID":"97826b66-db76-40d7-a06f-cb6f55739cc9","Type":"ContainerStarted","Data":"2127bf62c865f10737ce9653b25d0629497cb6b9a3b18bb23fd55e7fcc6beb3c"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.285651 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.290301 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" event={"ID":"2153b5f8-a977-41b6-a736-659e1a71cb99","Type":"ContainerStarted","Data":"498b9515835f4317b8d440580abc9a1e776172dd6536a032dc233ba6132a244c"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.291030 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.301059 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" podStartSLOduration=2.200249703 podStartE2EDuration="15.301038806s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.806322388 +0000 UTC m=+985.549979837" lastFinishedPulling="2026-02-19 08:37:35.907111491 +0000 UTC m=+998.650768940" observedRunningTime="2026-02-19 08:37:36.293146411 +0000 UTC m=+999.036803860" watchObservedRunningTime="2026-02-19 08:37:36.301038806 +0000 UTC m=+999.044696255" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.313089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" event={"ID":"53377a47-fc5a-452e-84ca-235e1d71311c","Type":"ContainerStarted","Data":"33d8fced6c22a6efdd71fd6691d5698b7597bbbc43b42a23957a3642491cf53d"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.313703 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.322484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" event={"ID":"49bbe48e-3c79-422c-a85b-15198ec1a88f","Type":"ContainerStarted","Data":"66f060caac7c60cdaa47285bfa78a65dc2227d3dd56bef9957c99ad06ba013e5"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.322566 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.337494 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" podStartSLOduration=2.171586894 podStartE2EDuration="15.337476736s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.454625427 +0000 UTC m=+985.198282876" lastFinishedPulling="2026-02-19 08:37:35.620515269 +0000 UTC m=+998.364172718" observedRunningTime="2026-02-19 08:37:36.320501247 +0000 UTC m=+999.064158696" watchObservedRunningTime="2026-02-19 08:37:36.337476736 +0000 UTC m=+999.081134185" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.337905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" event={"ID":"0bd45130-dc60-4a0b-882d-10f9fbb742d2","Type":"ContainerStarted","Data":"c8d0b2f124f385bdfb18f2a200b83fd52caaf2377facd472257c86cc0c2bc784"} Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.338756 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.367489 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" podStartSLOduration=2.5671025480000003 podStartE2EDuration="15.367476077s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.819020952 +0000 UTC m=+985.562678391" lastFinishedPulling="2026-02-19 08:37:35.619394471 +0000 UTC m=+998.363051920" observedRunningTime="2026-02-19 08:37:36.365800746 +0000 UTC m=+999.109458195" watchObservedRunningTime="2026-02-19 08:37:36.367476077 +0000 UTC m=+999.111133526" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.371671 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" podStartSLOduration=2.569711533 podStartE2EDuration="15.371661291s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.819063053 +0000 UTC m=+985.562720502" lastFinishedPulling="2026-02-19 08:37:35.621012811 +0000 UTC m=+998.364670260" observedRunningTime="2026-02-19 08:37:36.342374587 +0000 UTC m=+999.086032036" watchObservedRunningTime="2026-02-19 08:37:36.371661291 +0000 UTC m=+999.115318750" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.418145 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" podStartSLOduration=2.102748824 podStartE2EDuration="15.418110949s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.305450331 +0000 UTC m=+985.049107780" lastFinishedPulling="2026-02-19 08:37:35.620812456 +0000 UTC m=+998.364469905" observedRunningTime="2026-02-19 08:37:36.416413427 +0000 UTC m=+999.160070876" watchObservedRunningTime="2026-02-19 08:37:36.418110949 +0000 UTC m=+999.161768398" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.477567 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" podStartSLOduration=2.466029051 podStartE2EDuration="15.477552248s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.609509695 +0000 UTC m=+985.353167144" lastFinishedPulling="2026-02-19 08:37:35.621032892 +0000 UTC m=+998.364690341" observedRunningTime="2026-02-19 08:37:36.473251451 +0000 UTC m=+999.216908900" watchObservedRunningTime="2026-02-19 08:37:36.477552248 +0000 UTC m=+999.221209697" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.504380 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" podStartSLOduration=2.976180597 podStartE2EDuration="15.50436376s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:23.092184312 +0000 UTC m=+985.835841761" lastFinishedPulling="2026-02-19 08:37:35.620367475 +0000 UTC m=+998.364024924" observedRunningTime="2026-02-19 08:37:36.500603667 +0000 UTC m=+999.244261116" watchObservedRunningTime="2026-02-19 08:37:36.50436376 +0000 UTC m=+999.248021209" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.530055 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" podStartSLOduration=2.22723405 podStartE2EDuration="15.530038675s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.317404736 +0000 UTC m=+985.061062185" lastFinishedPulling="2026-02-19 08:37:35.620209361 +0000 UTC m=+998.363866810" observedRunningTime="2026-02-19 08:37:36.527845771 +0000 UTC m=+999.271503220" watchObservedRunningTime="2026-02-19 08:37:36.530038675 +0000 UTC m=+999.273696124" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.566948 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" podStartSLOduration=2.606991723 podStartE2EDuration="15.566928926s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.659790217 +0000 UTC m=+985.403447666" lastFinishedPulling="2026-02-19 08:37:35.61972742 +0000 UTC m=+998.363384869" observedRunningTime="2026-02-19 08:37:36.563093772 +0000 UTC m=+999.306751231" watchObservedRunningTime="2026-02-19 08:37:36.566928926 +0000 UTC m=+999.310586375" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.592085 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" podStartSLOduration=3.080201897 podStartE2EDuration="15.592061097s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:23.107515361 +0000 UTC m=+985.851172810" lastFinishedPulling="2026-02-19 08:37:35.619374561 +0000 UTC m=+998.363032010" observedRunningTime="2026-02-19 08:37:36.586711655 +0000 UTC m=+999.330369124" watchObservedRunningTime="2026-02-19 08:37:36.592061097 +0000 UTC m=+999.335718546" Feb 19 08:37:36 crc kubenswrapper[4780]: I0219 08:37:36.614393 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" podStartSLOduration=2.297960008 podStartE2EDuration="15.614371849s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.305488952 +0000 UTC m=+985.049146411" lastFinishedPulling="2026-02-19 08:37:35.621900803 +0000 UTC m=+998.365558252" observedRunningTime="2026-02-19 08:37:36.608994826 +0000 UTC m=+999.352652275" watchObservedRunningTime="2026-02-19 08:37:36.614371849 +0000 UTC m=+999.358029298" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.161981 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.176589 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2874085-9630-45db-aaa1-2e01dd53d11f-cert\") pod \"infra-operator-controller-manager-79d975b745-bctgj\" (UID: \"a2874085-9630-45db-aaa1-2e01dd53d11f\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.202084 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.369059 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.406821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/506816c7-86de-45c3-800d-96fe50b629f1-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8\" (UID: \"506816c7-86de-45c3-800d-96fe50b629f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.486428 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.722676 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bctgj"] Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.797413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:37 crc kubenswrapper[4780]: I0219 08:37:37.797497 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" Feb 19 08:37:37 crc kubenswrapper[4780]: E0219 08:37:37.797650 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:37:37 crc kubenswrapper[4780]: E0219 08:37:37.797700 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:53.797685441 +0000 UTC m=+1016.541342890 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "metrics-server-cert" not found Feb 19 08:37:37 crc kubenswrapper[4780]: E0219 08:37:37.797964 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:37:37 crc kubenswrapper[4780]: E0219 08:37:37.798048 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs podName:9466b8d7-85ef-4709-ae7c-87f0bf531fe0 nodeName:}" failed. No retries permitted until 2026-02-19 08:37:53.798027639 +0000 UTC m=+1016.541685148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-djmnp" (UID: "9466b8d7-85ef-4709-ae7c-87f0bf531fe0") : secret "webhook-server-cert" not found Feb 19 08:37:38 crc kubenswrapper[4780]: I0219 08:37:38.013558 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8"] Feb 19 08:37:38 crc kubenswrapper[4780]: I0219 08:37:38.360759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" event={"ID":"506816c7-86de-45c3-800d-96fe50b629f1","Type":"ContainerStarted","Data":"9d8f17ace1a2c03d4f5a8c3a6c3ecd15e2933ebd951508d0105af5f622875cb7"} Feb 19 08:37:38 crc kubenswrapper[4780]: I0219 08:37:38.362089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" 
event={"ID":"a2874085-9630-45db-aaa1-2e01dd53d11f","Type":"ContainerStarted","Data":"b34b663fcc4282412493dacc5a83413276c304060f06ff3f5b67c21765266e3e"} Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.472858 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gvbzc" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.496273 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-pbw7v" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.515106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bw4n6" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.536198 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-pqwdn" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.566805 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ln8qt" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.599809 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kcn42" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.629806 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-glkqc" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.691323 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-pvff4" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.720825 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jmvjt" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.902943 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cc8zw" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.953449 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rmrs9" Feb 19 08:37:41 crc kubenswrapper[4780]: I0219 08:37:41.965221 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6dr7r" Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.453669 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" event={"ID":"53b4b555-856b-4db0-b8e5-de61ff768cc6","Type":"ContainerStarted","Data":"4b9f81419a5255c3bb6cce049d973a83fb78e42f8de282e5788393eb1bf6854d"} Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.457230 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" event={"ID":"e6ba6725-19ae-4588-9ff1-a9830487fa82","Type":"ContainerStarted","Data":"7809b04e3fd9f662770f62b386fa0e9fb13aec68b2bac46cbfc40427f62705a0"} Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.458585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" event={"ID":"629b235e-a906-442c-b653-c829f6f4e4bd","Type":"ContainerStarted","Data":"638950911e1774e3842fcd81cbef490a00329478c12119d455e82fd061b4ff23"} Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.458813 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" Feb 19 08:37:48 
crc kubenswrapper[4780]: I0219 08:37:48.459049 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.460932 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" event={"ID":"1d22904f-de9c-407e-9757-72c0eca19ea1","Type":"ContainerStarted","Data":"08b648628bd63ce205d66f4b9e16b3d8aaa0ce72a44f281fb23e172a60280294"}
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.461345 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.463262 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" event={"ID":"a2874085-9630-45db-aaa1-2e01dd53d11f","Type":"ContainerStarted","Data":"164fc32d9e9806da20bcc2d9078b6f17c90041751685b9fb6ae1366d03ec66f8"}
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.463732 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.473928 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" event={"ID":"a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2","Type":"ContainerStarted","Data":"788df8e083d66be565120cd18ff70cdaf24ed2e5b277a285aa857de3adf0c457"}
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.474585 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.489721 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" event={"ID":"36d2971c-bf26-4327-9144-f5faa7490b05","Type":"ContainerStarted","Data":"6486450b433c21ccb7df42977d9a68e8dd60b40ed0673bbecc0e69afd0c8003c"}
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.489915 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.494392 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" event={"ID":"506816c7-86de-45c3-800d-96fe50b629f1","Type":"ContainerStarted","Data":"bbd4c7ad980744ef23f07c3887770e17347f2fb72cb81e7beb2edf4828fa12c7"}
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.495165 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.511056 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9lssb" podStartSLOduration=3.761948835 podStartE2EDuration="27.511040654s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:23.134708103 +0000 UTC m=+985.878365552" lastFinishedPulling="2026-02-19 08:37:46.883799922 +0000 UTC m=+1009.627457371" observedRunningTime="2026-02-19 08:37:48.470894012 +0000 UTC m=+1011.214551461" watchObservedRunningTime="2026-02-19 08:37:48.511040654 +0000 UTC m=+1011.254698113"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.517587 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms" podStartSLOduration=3.233351803 podStartE2EDuration="27.517567996s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:23.155197359 +0000 UTC m=+985.898854808" lastFinishedPulling="2026-02-19 08:37:47.439413552 +0000 UTC m=+1010.183071001" observedRunningTime="2026-02-19 08:37:48.505533518 +0000 UTC m=+1011.249190967" watchObservedRunningTime="2026-02-19 08:37:48.517567996 +0000 UTC m=+1011.261225445"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.531675 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj" podStartSLOduration=17.869539128 podStartE2EDuration="27.531660004s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:37.754310999 +0000 UTC m=+1000.497968448" lastFinishedPulling="2026-02-19 08:37:47.416431875 +0000 UTC m=+1010.160089324" observedRunningTime="2026-02-19 08:37:48.529596713 +0000 UTC m=+1011.273254172" watchObservedRunningTime="2026-02-19 08:37:48.531660004 +0000 UTC m=+1011.275317453"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.553445 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj" podStartSLOduration=2.966239442 podStartE2EDuration="27.553428432s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.829190433 +0000 UTC m=+985.572847892" lastFinishedPulling="2026-02-19 08:37:47.416379433 +0000 UTC m=+1010.160036882" observedRunningTime="2026-02-19 08:37:48.545561787 +0000 UTC m=+1011.289219236" watchObservedRunningTime="2026-02-19 08:37:48.553428432 +0000 UTC m=+1011.297085881"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.581336 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w" podStartSLOduration=3.308165501 podStartE2EDuration="27.581319011s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:23.107665955 +0000 UTC m=+985.851323404" lastFinishedPulling="2026-02-19 08:37:47.380819465 +0000 UTC m=+1010.124476914" observedRunningTime="2026-02-19 08:37:48.562530587 +0000 UTC m=+1011.306188036" watchObservedRunningTime="2026-02-19 08:37:48.581319011 +0000 UTC m=+1011.324976460"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.589035 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7" podStartSLOduration=3.313420911 podStartE2EDuration="27.589015061s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:23.140879796 +0000 UTC m=+985.884537235" lastFinishedPulling="2026-02-19 08:37:47.416473936 +0000 UTC m=+1010.160131385" observedRunningTime="2026-02-19 08:37:48.583980447 +0000 UTC m=+1011.327637896" watchObservedRunningTime="2026-02-19 08:37:48.589015061 +0000 UTC m=+1011.332672510"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.611929 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv" podStartSLOduration=3.06292146 podStartE2EDuration="27.611915247s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.831829508 +0000 UTC m=+985.575486957" lastFinishedPulling="2026-02-19 08:37:47.380823295 +0000 UTC m=+1010.124480744" observedRunningTime="2026-02-19 08:37:48.606897383 +0000 UTC m=+1011.350554832" watchObservedRunningTime="2026-02-19 08:37:48.611915247 +0000 UTC m=+1011.355572696"
Feb 19 08:37:48 crc kubenswrapper[4780]: I0219 08:37:48.633697 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8" podStartSLOduration=18.24028075 podStartE2EDuration="27.633679835s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:38.022930637 +0000 UTC m=+1000.766588086" lastFinishedPulling="2026-02-19 08:37:47.416329722 +0000 UTC m=+1010.159987171" observedRunningTime="2026-02-19 08:37:48.629459131 +0000 UTC m=+1011.373116590" watchObservedRunningTime="2026-02-19 08:37:48.633679835 +0000 UTC m=+1011.377337284"
Feb 19 08:37:50 crc kubenswrapper[4780]: I0219 08:37:50.512785 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" event={"ID":"ad05a5f1-785e-4342-856b-e717d51e36bc","Type":"ContainerStarted","Data":"fe0d582c5fb3cb927e4b0867d7465f64e7c7e0ddb6459741b3b3b5d9ed1e0f87"}
Feb 19 08:37:50 crc kubenswrapper[4780]: I0219 08:37:50.515147 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg"
Feb 19 08:37:50 crc kubenswrapper[4780]: I0219 08:37:50.533813 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg" podStartSLOduration=2.745885785 podStartE2EDuration="29.533794629s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.610772426 +0000 UTC m=+985.354429875" lastFinishedPulling="2026-02-19 08:37:49.39868127 +0000 UTC m=+1012.142338719" observedRunningTime="2026-02-19 08:37:50.530234071 +0000 UTC m=+1013.273891560" watchObservedRunningTime="2026-02-19 08:37:50.533794629 +0000 UTC m=+1013.277452078"
Feb 19 08:37:51 crc kubenswrapper[4780]: I0219 08:37:51.522764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" event={"ID":"2744300e-54a9-4fba-88a0-fe6741f88116","Type":"ContainerStarted","Data":"b73385d520b9079825b48da09b0c30ad7bd08f450967999ba962eead396983cb"}
Feb 19 08:37:51 crc kubenswrapper[4780]: I0219 08:37:51.526656 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld"
Feb 19 08:37:51 crc kubenswrapper[4780]: I0219 08:37:51.543079 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld" podStartSLOduration=2.778349758 podStartE2EDuration="30.54306084s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="2026-02-19 08:37:22.634550103 +0000 UTC m=+985.378207552" lastFinishedPulling="2026-02-19 08:37:50.399261175 +0000 UTC m=+1013.142918634" observedRunningTime="2026-02-19 08:37:51.538516608 +0000 UTC m=+1014.282174057" watchObservedRunningTime="2026-02-19 08:37:51.54306084 +0000 UTC m=+1014.286718309"
Feb 19 08:37:52 crc kubenswrapper[4780]: I0219 08:37:52.163505 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czsms"
Feb 19 08:37:52 crc kubenswrapper[4780]: I0219 08:37:52.188882 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-gwz8w"
Feb 19 08:37:53 crc kubenswrapper[4780]: I0219 08:37:53.803619 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:37:53 crc kubenswrapper[4780]: I0219 08:37:53.803789 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:37:53 crc kubenswrapper[4780]: I0219 08:37:53.810496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:37:53 crc kubenswrapper[4780]: I0219 08:37:53.811482 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466b8d7-85ef-4709-ae7c-87f0bf531fe0-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-djmnp\" (UID: \"9466b8d7-85ef-4709-ae7c-87f0bf531fe0\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:37:53 crc kubenswrapper[4780]: I0219 08:37:53.988841 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:37:54 crc kubenswrapper[4780]: I0219 08:37:54.223307 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"]
Feb 19 08:37:54 crc kubenswrapper[4780]: W0219 08:37:54.226862 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9466b8d7_85ef_4709_ae7c_87f0bf531fe0.slice/crio-5d2ec410ace4cd5a5857ded42c9635e36ea56e85f698c71fd9a6bf85a7b9bb2b WatchSource:0}: Error finding container 5d2ec410ace4cd5a5857ded42c9635e36ea56e85f698c71fd9a6bf85a7b9bb2b: Status 404 returned error can't find the container with id 5d2ec410ace4cd5a5857ded42c9635e36ea56e85f698c71fd9a6bf85a7b9bb2b
Feb 19 08:37:54 crc kubenswrapper[4780]: I0219 08:37:54.546543 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" event={"ID":"9466b8d7-85ef-4709-ae7c-87f0bf531fe0","Type":"ContainerStarted","Data":"e8b294159b1d3e96fa6778e7c5106e831aba34cf9eb761eaae2c0379fece94ff"}
Feb 19 08:37:54 crc kubenswrapper[4780]: I0219 08:37:54.546590 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" event={"ID":"9466b8d7-85ef-4709-ae7c-87f0bf531fe0","Type":"ContainerStarted","Data":"5d2ec410ace4cd5a5857ded42c9635e36ea56e85f698c71fd9a6bf85a7b9bb2b"}
Feb 19 08:37:54 crc kubenswrapper[4780]: I0219 08:37:54.546686 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:37:54 crc kubenswrapper[4780]: I0219 08:37:54.578786 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp" podStartSLOduration=33.578770907 podStartE2EDuration="33.578770907s" podCreationTimestamp="2026-02-19 08:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:37:54.577491325 +0000 UTC m=+1017.321148774" watchObservedRunningTime="2026-02-19 08:37:54.578770907 +0000 UTC m=+1017.322428356"
Feb 19 08:37:57 crc kubenswrapper[4780]: I0219 08:37:57.211506 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bctgj"
Feb 19 08:37:57 crc kubenswrapper[4780]: I0219 08:37:57.492364 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8"
Feb 19 08:38:01 crc kubenswrapper[4780]: I0219 08:38:01.650052 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-b6vcv"
Feb 19 08:38:01 crc kubenswrapper[4780]: I0219 08:38:01.679564 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-474lg"
Feb 19 08:38:01 crc kubenswrapper[4780]: I0219 08:38:01.720316 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rxkld"
Feb 19 08:38:01 crc kubenswrapper[4780]: I0219 08:38:01.736669 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dk2pj"
Feb 19 08:38:01 crc kubenswrapper[4780]: I0219 08:38:01.930919 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lbtq7"
Feb 19 08:38:03 crc kubenswrapper[4780]: I0219 08:38:03.997063 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-djmnp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.522959 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2m7hd"]
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.524806 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.529258 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.529459 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.529627 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4mpmd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.529754 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.536379 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2m7hd"]
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.574004 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-l5hpp"]
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.575149 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.577571 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.591752 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-config\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.591795 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qxb8\" (UniqueName: \"kubernetes.io/projected/937b2f88-5eef-4d2f-b638-62f337bbfcb2-kube-api-access-8qxb8\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.591880 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzj85\" (UniqueName: \"kubernetes.io/projected/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-kube-api-access-hzj85\") pod \"dnsmasq-dns-855cbc58c5-2m7hd\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.591925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.592073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-config\") pod \"dnsmasq-dns-855cbc58c5-2m7hd\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.600065 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-l5hpp"]
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.693551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-config\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.693586 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qxb8\" (UniqueName: \"kubernetes.io/projected/937b2f88-5eef-4d2f-b638-62f337bbfcb2-kube-api-access-8qxb8\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.693617 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzj85\" (UniqueName: \"kubernetes.io/projected/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-kube-api-access-hzj85\") pod \"dnsmasq-dns-855cbc58c5-2m7hd\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.693638 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.693704 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-config\") pod \"dnsmasq-dns-855cbc58c5-2m7hd\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.694397 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-config\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.694582 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-config\") pod \"dnsmasq-dns-855cbc58c5-2m7hd\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.694806 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.713228 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzj85\" (UniqueName: \"kubernetes.io/projected/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-kube-api-access-hzj85\") pod \"dnsmasq-dns-855cbc58c5-2m7hd\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.714637 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qxb8\" (UniqueName: \"kubernetes.io/projected/937b2f88-5eef-4d2f-b638-62f337bbfcb2-kube-api-access-8qxb8\") pod \"dnsmasq-dns-6fcf94d689-l5hpp\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.848133 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd"
Feb 19 08:38:19 crc kubenswrapper[4780]: I0219 08:38:19.901105 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp"
Feb 19 08:38:20 crc kubenswrapper[4780]: I0219 08:38:20.074108 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2m7hd"]
Feb 19 08:38:20 crc kubenswrapper[4780]: I0219 08:38:20.354225 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-l5hpp"]
Feb 19 08:38:20 crc kubenswrapper[4780]: W0219 08:38:20.359468 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod937b2f88_5eef_4d2f_b638_62f337bbfcb2.slice/crio-a3e05c4d29253cce73f3021b6eb29e54580f9925ff61037bb488b26c4c39b975 WatchSource:0}: Error finding container a3e05c4d29253cce73f3021b6eb29e54580f9925ff61037bb488b26c4c39b975: Status 404 returned error can't find the container with id a3e05c4d29253cce73f3021b6eb29e54580f9925ff61037bb488b26c4c39b975
Feb 19 08:38:20 crc kubenswrapper[4780]: I0219 08:38:20.768806 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp" event={"ID":"937b2f88-5eef-4d2f-b638-62f337bbfcb2","Type":"ContainerStarted","Data":"a3e05c4d29253cce73f3021b6eb29e54580f9925ff61037bb488b26c4c39b975"}
Feb 19 08:38:20 crc kubenswrapper[4780]: I0219 08:38:20.770275 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd" event={"ID":"ee0de2c6-47c4-4906-9eff-fe9e5fd98527","Type":"ContainerStarted","Data":"64b07b8761e43f81b341c486e4dd5a9e64ffe0310d6b08696fac6a6a1ff10bb4"}
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.356777 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-l5hpp"]
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.384261 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-qklqs"]
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.385610 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.391017 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-qklqs"]
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.537505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcshx\" (UniqueName: \"kubernetes.io/projected/5dd45d04-b13e-4452-9f53-acf41c82b84c-kube-api-access-kcshx\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.537567 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-dns-svc\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.537611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-config\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.639066 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-config\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.639198 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcshx\" (UniqueName: \"kubernetes.io/projected/5dd45d04-b13e-4452-9f53-acf41c82b84c-kube-api-access-kcshx\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.639250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-dns-svc\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.640115 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-dns-svc\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.640156 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-config\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.668072 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcshx\" (UniqueName: \"kubernetes.io/projected/5dd45d04-b13e-4452-9f53-acf41c82b84c-kube-api-access-kcshx\") pod \"dnsmasq-dns-f54874ffc-qklqs\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.706883 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-qklqs"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.742180 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2m7hd"]
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.772905 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-7lqsw"]
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.773947 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.785913 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-7lqsw"]
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.959091 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qm2\" (UniqueName: \"kubernetes.io/projected/f446bd54-9c80-4a6b-904a-402540baa0c1-kube-api-access-z5qm2\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.959241 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-dns-svc\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:22 crc kubenswrapper[4780]: I0219 08:38:22.959318 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-config\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.060170 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qm2\" (UniqueName: \"kubernetes.io/projected/f446bd54-9c80-4a6b-904a-402540baa0c1-kube-api-access-z5qm2\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.060243 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-dns-svc\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.060281 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-config\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.061197 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-config\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.061315 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-dns-svc\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.081927 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qm2\" (UniqueName: \"kubernetes.io/projected/f446bd54-9c80-4a6b-904a-402540baa0c1-kube-api-access-z5qm2\") pod \"dnsmasq-dns-67ff45466c-7lqsw\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.092060 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.202860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-qklqs"]
Feb 19 08:38:23 crc kubenswrapper[4780]: W0219 08:38:23.206245 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dd45d04_b13e_4452_9f53_acf41c82b84c.slice/crio-1cf1aee7e1e798eaeb961b4862dadd2c029af0a3cce859df8fadbb0cc9dd5e31 WatchSource:0}: Error finding container 1cf1aee7e1e798eaeb961b4862dadd2c029af0a3cce859df8fadbb0cc9dd5e31: Status 404 returned error can't find the container with id 1cf1aee7e1e798eaeb961b4862dadd2c029af0a3cce859df8fadbb0cc9dd5e31
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.541631 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-7lqsw"]
Feb 19 08:38:23 crc kubenswrapper[4780]: W0219 08:38:23.552578 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf446bd54_9c80_4a6b_904a_402540baa0c1.slice/crio-bc1bad20c06102478332257ce6a9d42dccdcd453e16fb21529f716ccd8c76dae WatchSource:0}: Error finding container bc1bad20c06102478332257ce6a9d42dccdcd453e16fb21529f716ccd8c76dae: Status 404 returned error can't find the container with id bc1bad20c06102478332257ce6a9d42dccdcd453e16fb21529f716ccd8c76dae
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.629309 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.633326 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.636854 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.636902 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.636991 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4kmpz"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.637031 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.637010 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.637158 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.641821 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.645080 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.770917 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.770967 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771033 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0"
Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771048 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-kube-api-access-k2qvg\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771083 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771144 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.771173 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc 
kubenswrapper[4780]: I0219 08:38:23.771198 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.810061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw" event={"ID":"f446bd54-9c80-4a6b-904a-402540baa0c1","Type":"ContainerStarted","Data":"bc1bad20c06102478332257ce6a9d42dccdcd453e16fb21529f716ccd8c76dae"} Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.812911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f54874ffc-qklqs" event={"ID":"5dd45d04-b13e-4452-9f53-acf41c82b84c","Type":"ContainerStarted","Data":"1cf1aee7e1e798eaeb961b4862dadd2c029af0a3cce859df8fadbb0cc9dd5e31"} Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.871998 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872040 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872104 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872135 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872157 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-kube-api-access-k2qvg\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872196 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872210 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872231 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872249 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872550 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.872646 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.873470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.873685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.874198 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.878428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.879159 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.880693 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 
crc kubenswrapper[4780]: I0219 08:38:23.885947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.894009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.899661 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.900770 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.903503 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.903963 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.904091 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.904289 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h45cs" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.905320 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.911754 4780 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.911971 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.918286 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.919639 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.920152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-kube-api-access-k2qvg\") pod \"rabbitmq-server-0\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") " pod="openstack/rabbitmq-server-0" Feb 19 08:38:23 crc kubenswrapper[4780]: I0219 08:38:23.967530 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077177 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077218 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077273 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077305 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077324 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc00934-94b1-4be3-8bf4-845ad08a453f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077345 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc00934-94b1-4be3-8bf4-845ad08a453f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077498 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077555 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077618 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077642 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z4zr\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-kube-api-access-5z4zr\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.077817 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.178993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179041 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179108 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z4zr\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-kube-api-access-5z4zr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc 
kubenswrapper[4780]: I0219 08:38:24.179166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179226 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179234 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179249 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179659 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179714 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc00934-94b1-4be3-8bf4-845ad08a453f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.179756 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc00934-94b1-4be3-8bf4-845ad08a453f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.180145 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.180466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.180737 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.181418 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.183871 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.184867 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc00934-94b1-4be3-8bf4-845ad08a453f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.185003 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.186958 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc00934-94b1-4be3-8bf4-845ad08a453f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.193261 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.198432 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z4zr\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-kube-api-access-5z4zr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.205399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.333352 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.408973 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 08:38:24 crc kubenswrapper[4780]: W0219 08:38:24.418633 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb814fc4c_5e70_4b85_84b0_dcfc4cd4c16d.slice/crio-1943e82197f795ca64a71f6b980217f179bb2b845d760c1f6da5d6226929a448 WatchSource:0}: Error finding container 1943e82197f795ca64a71f6b980217f179bb2b845d760c1f6da5d6226929a448: Status 404 returned error can't find the container with id 1943e82197f795ca64a71f6b980217f179bb2b845d760c1f6da5d6226929a448 Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.805708 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.821052 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d","Type":"ContainerStarted","Data":"1943e82197f795ca64a71f6b980217f179bb2b845d760c1f6da5d6226929a448"} Feb 19 08:38:24 crc kubenswrapper[4780]: W0219 08:38:24.832509 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc00934_94b1_4be3_8bf4_845ad08a453f.slice/crio-b281f9a655f05156eb2a34b396913745af0b56a30d288667820688b4bf1e2ced WatchSource:0}: Error finding container b281f9a655f05156eb2a34b396913745af0b56a30d288667820688b4bf1e2ced: Status 404 returned error can't find the container with id b281f9a655f05156eb2a34b396913745af0b56a30d288667820688b4bf1e2ced Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.910058 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.913633 4780 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.916767 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bszfw" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.916867 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.917327 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.922039 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.922382 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 08:38:24 crc kubenswrapper[4780]: I0219 08:38:24.929170 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-default\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096615 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-operator-scripts\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096649 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096686 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-generated\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dt2z\" (UniqueName: \"kubernetes.io/projected/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kube-api-access-2dt2z\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096770 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.096791 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kolla-config\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.199760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-operator-scripts\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200084 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-generated\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200145 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dt2z\" (UniqueName: \"kubernetes.io/projected/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kube-api-access-2dt2z\") pod 
\"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200183 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kolla-config\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200199 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-default\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.200586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-generated\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.201269 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-default\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc 
kubenswrapper[4780]: I0219 08:38:25.201632 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.201701 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kolla-config\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.202338 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-operator-scripts\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.205680 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.206085 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.216290 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dt2z\" (UniqueName: 
\"kubernetes.io/projected/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kube-api-access-2dt2z\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.223061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.229906 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.813422 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 08:38:25 crc kubenswrapper[4780]: I0219 08:38:25.865907 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc00934-94b1-4be3-8bf4-845ad08a453f","Type":"ContainerStarted","Data":"b281f9a655f05156eb2a34b396913745af0b56a30d288667820688b4bf1e2ced"} Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.351578 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.357829 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.360792 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.361220 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.361359 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.362429 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pbdsv" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.364143 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.531935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.531987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvbk\" (UniqueName: \"kubernetes.io/projected/01c909ff-b464-4334-a8d6-4e7a06b88126-kube-api-access-vfvbk\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.532014 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.532203 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.532290 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.532331 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.532377 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.532532 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.633977 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634021 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634113 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634156 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634171 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634186 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvbk\" (UniqueName: \"kubernetes.io/projected/01c909ff-b464-4334-a8d6-4e7a06b88126-kube-api-access-vfvbk\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634624 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.634644 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.635066 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.639856 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.649623 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.652004 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvbk\" (UniqueName: \"kubernetes.io/projected/01c909ff-b464-4334-a8d6-4e7a06b88126-kube-api-access-vfvbk\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.676013 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 
08:38:26.676623 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.691842 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.724851 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.726453 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.729240 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-krvrg" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.729283 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.729494 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.735583 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.836886 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.836958 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-config-data\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.836983 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx658\" (UniqueName: \"kubernetes.io/projected/acd7c548-a04c-4556-bcae-618ae65658de-kube-api-access-xx658\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.837019 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-kolla-config\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.837043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.878216 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73cb84ca-f3ee-4c97-8c4d-0a1564822827","Type":"ContainerStarted","Data":"60b46abe092b86d449446c89e2b4b1991a09ce34bdf022cdb388f39941292f30"} Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.938267 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-kolla-config\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.938352 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.938403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.938463 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-config-data\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.938507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx658\" (UniqueName: \"kubernetes.io/projected/acd7c548-a04c-4556-bcae-618ae65658de-kube-api-access-xx658\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.939923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-kolla-config\") pod \"memcached-0\" (UID: 
\"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.941039 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-config-data\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.961015 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.961234 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx658\" (UniqueName: \"kubernetes.io/projected/acd7c548-a04c-4556-bcae-618ae65658de-kube-api-access-xx658\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.964014 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " pod="openstack/memcached-0" Feb 19 08:38:26 crc kubenswrapper[4780]: I0219 08:38:26.991226 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:27 crc kubenswrapper[4780]: I0219 08:38:27.062713 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 08:38:27 crc kubenswrapper[4780]: I0219 08:38:27.745282 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 08:38:27 crc kubenswrapper[4780]: I0219 08:38:27.824938 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 08:38:28 crc kubenswrapper[4780]: I0219 08:38:28.884483 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:38:28 crc kubenswrapper[4780]: I0219 08:38:28.885569 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:38:28 crc kubenswrapper[4780]: I0219 08:38:28.888423 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6fzsh" Feb 19 08:38:28 crc kubenswrapper[4780]: I0219 08:38:28.894305 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:38:28 crc kubenswrapper[4780]: I0219 08:38:28.976394 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggrr8\" (UniqueName: \"kubernetes.io/projected/9a785445-258d-4c77-a8e3-294ba1f0aca3-kube-api-access-ggrr8\") pod \"kube-state-metrics-0\" (UID: \"9a785445-258d-4c77-a8e3-294ba1f0aca3\") " pod="openstack/kube-state-metrics-0" Feb 19 08:38:29 crc kubenswrapper[4780]: I0219 08:38:29.077776 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggrr8\" (UniqueName: \"kubernetes.io/projected/9a785445-258d-4c77-a8e3-294ba1f0aca3-kube-api-access-ggrr8\") pod \"kube-state-metrics-0\" (UID: \"9a785445-258d-4c77-a8e3-294ba1f0aca3\") " pod="openstack/kube-state-metrics-0" Feb 19 08:38:29 crc kubenswrapper[4780]: I0219 08:38:29.101224 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggrr8\" (UniqueName: 
\"kubernetes.io/projected/9a785445-258d-4c77-a8e3-294ba1f0aca3-kube-api-access-ggrr8\") pod \"kube-state-metrics-0\" (UID: \"9a785445-258d-4c77-a8e3-294ba1f0aca3\") " pod="openstack/kube-state-metrics-0" Feb 19 08:38:29 crc kubenswrapper[4780]: I0219 08:38:29.239271 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.581521 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nj9cs"] Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.582963 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.585935 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-shnbj" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.586941 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.587326 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.601702 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nj9cs"] Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.621647 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-s8k96"] Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.623306 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.710806 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s8k96"] Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.763861 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-combined-ca-bundle\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.763921 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcwm\" (UniqueName: \"kubernetes.io/projected/6d459ce0-3049-4b3a-a076-682771965fc2-kube-api-access-jmcwm\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.763954 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.763974 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d459ce0-3049-4b3a-a076-682771965fc2-scripts\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764002 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-ovn-controller-tls-certs\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-lib\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764053 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpcl\" (UniqueName: \"kubernetes.io/projected/d1721266-ba6d-49a4-b30d-049d4f4e1978-kube-api-access-2tpcl\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764067 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-etc-ovs\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764092 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run-ovn\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-log-ovn\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764160 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-log\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764184 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-run\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.764206 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1721266-ba6d-49a4-b30d-049d4f4e1978-scripts\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.865948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-run\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866003 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1721266-ba6d-49a4-b30d-049d4f4e1978-scripts\") pod \"ovn-controller-nj9cs\" (UID: 
\"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-combined-ca-bundle\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866054 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcwm\" (UniqueName: \"kubernetes.io/projected/6d459ce0-3049-4b3a-a076-682771965fc2-kube-api-access-jmcwm\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866076 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d459ce0-3049-4b3a-a076-682771965fc2-scripts\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-ovn-controller-tls-certs\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc 
kubenswrapper[4780]: I0219 08:38:32.866180 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-lib\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866194 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-etc-ovs\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tpcl\" (UniqueName: \"kubernetes.io/projected/d1721266-ba6d-49a4-b30d-049d4f4e1978-kube-api-access-2tpcl\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866239 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run-ovn\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866258 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-log-ovn\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866293 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-log\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-run\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866608 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866625 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-log\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866697 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run-ovn\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-etc-ovs\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" 
Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866807 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-log-ovn\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.866835 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-lib\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.868190 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d459ce0-3049-4b3a-a076-682771965fc2-scripts\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.870347 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1721266-ba6d-49a4-b30d-049d4f4e1978-scripts\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.872137 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-combined-ca-bundle\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.873497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-ovn-controller-tls-certs\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.881507 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tpcl\" (UniqueName: \"kubernetes.io/projected/d1721266-ba6d-49a4-b30d-049d4f4e1978-kube-api-access-2tpcl\") pod \"ovn-controller-nj9cs\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.885285 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcwm\" (UniqueName: \"kubernetes.io/projected/6d459ce0-3049-4b3a-a076-682771965fc2-kube-api-access-jmcwm\") pod \"ovn-controller-ovs-s8k96\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.943648 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acd7c548-a04c-4556-bcae-618ae65658de","Type":"ContainerStarted","Data":"480a47c2239a2c273902d5f0f5119cdda9abaeb197d1a746aaaeff595a40b947"} Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.945799 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"01c909ff-b464-4334-a8d6-4e7a06b88126","Type":"ContainerStarted","Data":"a33af5d578843a237aa2b608fd6424c8e4369be30a381dad4bc9bc1be48ff8e5"} Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.946333 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:32 crc kubenswrapper[4780]: I0219 08:38:32.967978 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.283054 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.284244 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.291771 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.291955 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.292069 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.292191 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-85jkz" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.292504 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.306140 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.381321 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.381371 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-config\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.381407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.381444 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.381465 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.381484 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.382295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.382375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztkl\" (UniqueName: \"kubernetes.io/projected/c875b359-e76d-4fd0-99fb-10c8b04dfb35-kube-api-access-tztkl\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483813 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc 
kubenswrapper[4780]: I0219 08:38:33.483883 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztkl\" (UniqueName: \"kubernetes.io/projected/c875b359-e76d-4fd0-99fb-10c8b04dfb35-kube-api-access-tztkl\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483945 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.483965 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-config\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.484429 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.484773 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-config\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.484937 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.485026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.490633 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.493537 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.499235 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.502412 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztkl\" (UniqueName: \"kubernetes.io/projected/c875b359-e76d-4fd0-99fb-10c8b04dfb35-kube-api-access-tztkl\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.508836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:33 crc kubenswrapper[4780]: I0219 08:38:33.620401 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.688296 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.690054 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.692837 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.692910 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.693812 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x2npj" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.693837 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.710031 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.818835 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zsx5\" (UniqueName: \"kubernetes.io/projected/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-kube-api-access-4zsx5\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.818907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.818968 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.819003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-config\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.819042 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.819073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.819155 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.819199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 
19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.920734 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.920820 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-config\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.920860 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.921751 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-config\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.921763 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.922049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.922170 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.922332 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.922365 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.922530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zsx5\" (UniqueName: \"kubernetes.io/projected/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-kube-api-access-4zsx5\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.922625 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " 
pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.924285 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.926837 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.927401 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.929457 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.939186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zsx5\" (UniqueName: \"kubernetes.io/projected/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-kube-api-access-4zsx5\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:35 crc kubenswrapper[4780]: I0219 08:38:35.943085 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") " pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:36 crc kubenswrapper[4780]: I0219 08:38:36.025297 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 08:38:39 crc kubenswrapper[4780]: I0219 08:38:39.528652 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:38:45 crc kubenswrapper[4780]: W0219 08:38:45.507449 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a785445_258d_4c77_a8e3_294ba1f0aca3.slice/crio-48b5ea0a6fb874126eaae00a3bb03f3cdc6514dcd0139bacbdbb9a46eeadbc09 WatchSource:0}: Error finding container 48b5ea0a6fb874126eaae00a3bb03f3cdc6514dcd0139bacbdbb9a46eeadbc09: Status 404 returned error can't find the container with id 48b5ea0a6fb874126eaae00a3bb03f3cdc6514dcd0139bacbdbb9a46eeadbc09 Feb 19 08:38:45 crc kubenswrapper[4780]: I0219 08:38:45.513440 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:38:46 crc kubenswrapper[4780]: I0219 08:38:46.079288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a785445-258d-4c77-a8e3-294ba1f0aca3","Type":"ContainerStarted","Data":"48b5ea0a6fb874126eaae00a3bb03f3cdc6514dcd0139bacbdbb9a46eeadbc09"} Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.698689 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.699149 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5qm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-67ff45466c-7lqsw_openstack(f446bd54-9c80-4a6b-904a-402540baa0c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.706535 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw" podUID="f446bd54-9c80-4a6b-904a-402540baa0c1" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.739928 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.740241 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcshx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-f54874ffc-qklqs_openstack(5dd45d04-b13e-4452-9f53-acf41c82b84c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.741535 4780 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-f54874ffc-qklqs" podUID="5dd45d04-b13e-4452-9f53-acf41c82b84c" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.743303 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.743440 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzj85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-855cbc58c5-2m7hd_openstack(ee0de2c6-47c4-4906-9eff-fe9e5fd98527): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.744590 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd" podUID="ee0de2c6-47c4-4906-9eff-fe9e5fd98527" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.793803 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.793943 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qxb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:n
il,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fcf94d689-l5hpp_openstack(937b2f88-5eef-4d2f-b638-62f337bbfcb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:38:47 crc kubenswrapper[4780]: E0219 08:38:47.795250 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp" podUID="937b2f88-5eef-4d2f-b638-62f337bbfcb2" Feb 19 08:38:48 crc kubenswrapper[4780]: I0219 08:38:48.103257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"01c909ff-b464-4334-a8d6-4e7a06b88126","Type":"ContainerStarted","Data":"f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67"} Feb 19 08:38:48 crc kubenswrapper[4780]: I0219 08:38:48.106432 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73cb84ca-f3ee-4c97-8c4d-0a1564822827","Type":"ContainerStarted","Data":"514c00fff12df406f7165a76b74f40b00b1ac7918ea0cd73c453c1c82402e66f"} Feb 19 08:38:48 crc kubenswrapper[4780]: E0219 08:38:48.107155 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2\\\"\"" pod="openstack/dnsmasq-dns-f54874ffc-qklqs" podUID="5dd45d04-b13e-4452-9f53-acf41c82b84c" Feb 19 08:38:48 crc kubenswrapper[4780]: E0219 08:38:48.108891 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2\\\"\"" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw" podUID="f446bd54-9c80-4a6b-904a-402540baa0c1" Feb 19 08:38:48 crc kubenswrapper[4780]: I0219 08:38:48.458257 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 08:38:48 crc kubenswrapper[4780]: I0219 08:38:48.614346 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nj9cs"] Feb 19 08:38:48 crc kubenswrapper[4780]: I0219 08:38:48.690487 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 08:38:49 crc kubenswrapper[4780]: I0219 08:38:49.114670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acd7c548-a04c-4556-bcae-618ae65658de","Type":"ContainerStarted","Data":"138859dce20becf173ad96258d71984b57487f1a412d44d9fd3ffe1deb62aa39"} Feb 19 08:38:49 crc kubenswrapper[4780]: I0219 08:38:49.132478 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.945305902 podStartE2EDuration="23.13245206s" podCreationTimestamp="2026-02-19 08:38:26 +0000 UTC" firstStartedPulling="2026-02-19 08:38:32.533093884 +0000 UTC m=+1055.276751333" lastFinishedPulling="2026-02-19 08:38:47.720240042 +0000 UTC m=+1070.463897491" observedRunningTime="2026-02-19 
08:38:49.129688652 +0000 UTC m=+1071.873346131" watchObservedRunningTime="2026-02-19 08:38:49.13245206 +0000 UTC m=+1071.876109539" Feb 19 08:38:49 crc kubenswrapper[4780]: I0219 08:38:49.362787 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s8k96"] Feb 19 08:38:50 crc kubenswrapper[4780]: I0219 08:38:50.125780 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 08:38:51 crc kubenswrapper[4780]: W0219 08:38:51.545075 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1721266_ba6d_49a4_b30d_049d4f4e1978.slice/crio-a53d68d89a6d0cd9e41606129b505dc54db5c9378f37093dae27e02e7f78a906 WatchSource:0}: Error finding container a53d68d89a6d0cd9e41606129b505dc54db5c9378f37093dae27e02e7f78a906: Status 404 returned error can't find the container with id a53d68d89a6d0cd9e41606129b505dc54db5c9378f37093dae27e02e7f78a906 Feb 19 08:38:51 crc kubenswrapper[4780]: W0219 08:38:51.548500 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ac3deb5_ea1f_479c_a8a4_bbcbd48b58e9.slice/crio-8d9e72110121a571cbeddde60065cff40785696f809e8ebc480734d27f4198a2 WatchSource:0}: Error finding container 8d9e72110121a571cbeddde60065cff40785696f809e8ebc480734d27f4198a2: Status 404 returned error can't find the container with id 8d9e72110121a571cbeddde60065cff40785696f809e8ebc480734d27f4198a2 Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.602228 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.608146 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.706764 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-dns-svc\") pod \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.706824 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-config\") pod \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.706866 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qxb8\" (UniqueName: \"kubernetes.io/projected/937b2f88-5eef-4d2f-b638-62f337bbfcb2-kube-api-access-8qxb8\") pod \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.706930 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzj85\" (UniqueName: \"kubernetes.io/projected/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-kube-api-access-hzj85\") pod \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\" (UID: \"ee0de2c6-47c4-4906-9eff-fe9e5fd98527\") " Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.706997 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-config\") pod \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\" (UID: \"937b2f88-5eef-4d2f-b638-62f337bbfcb2\") " Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.707722 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-config" (OuterVolumeSpecName: "config") pod "937b2f88-5eef-4d2f-b638-62f337bbfcb2" (UID: "937b2f88-5eef-4d2f-b638-62f337bbfcb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.718986 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-config" (OuterVolumeSpecName: "config") pod "ee0de2c6-47c4-4906-9eff-fe9e5fd98527" (UID: "ee0de2c6-47c4-4906-9eff-fe9e5fd98527"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.720383 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "937b2f88-5eef-4d2f-b638-62f337bbfcb2" (UID: "937b2f88-5eef-4d2f-b638-62f337bbfcb2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.727390 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-kube-api-access-hzj85" (OuterVolumeSpecName: "kube-api-access-hzj85") pod "ee0de2c6-47c4-4906-9eff-fe9e5fd98527" (UID: "ee0de2c6-47c4-4906-9eff-fe9e5fd98527"). InnerVolumeSpecName "kube-api-access-hzj85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.727459 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937b2f88-5eef-4d2f-b638-62f337bbfcb2-kube-api-access-8qxb8" (OuterVolumeSpecName: "kube-api-access-8qxb8") pod "937b2f88-5eef-4d2f-b638-62f337bbfcb2" (UID: "937b2f88-5eef-4d2f-b638-62f337bbfcb2"). InnerVolumeSpecName "kube-api-access-8qxb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.809315 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.809353 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.809363 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qxb8\" (UniqueName: \"kubernetes.io/projected/937b2f88-5eef-4d2f-b638-62f337bbfcb2-kube-api-access-8qxb8\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.809373 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzj85\" (UniqueName: \"kubernetes.io/projected/ee0de2c6-47c4-4906-9eff-fe9e5fd98527-kube-api-access-hzj85\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:51 crc kubenswrapper[4780]: I0219 08:38:51.809382 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937b2f88-5eef-4d2f-b638-62f337bbfcb2-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.142758 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c875b359-e76d-4fd0-99fb-10c8b04dfb35","Type":"ContainerStarted","Data":"4cd2a790e9ced9e9f98c43521f5bde9840c3d1cc6d36d437ecc0d00d02c54f0c"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.143999 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nj9cs" event={"ID":"d1721266-ba6d-49a4-b30d-049d4f4e1978","Type":"ContainerStarted","Data":"a53d68d89a6d0cd9e41606129b505dc54db5c9378f37093dae27e02e7f78a906"} Feb 19 
08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.145752 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc00934-94b1-4be3-8bf4-845ad08a453f","Type":"ContainerStarted","Data":"92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.147062 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp" event={"ID":"937b2f88-5eef-4d2f-b638-62f337bbfcb2","Type":"ContainerDied","Data":"a3e05c4d29253cce73f3021b6eb29e54580f9925ff61037bb488b26c4c39b975"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.147158 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-l5hpp" Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.150798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9","Type":"ContainerStarted","Data":"8d9e72110121a571cbeddde60065cff40785696f809e8ebc480734d27f4198a2"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.155501 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerStarted","Data":"0a83fe931441676f24f3e4f0dc0927aa9482b5f65344667293aa1db23099a60f"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.158323 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd" Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.158325 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-2m7hd" event={"ID":"ee0de2c6-47c4-4906-9eff-fe9e5fd98527","Type":"ContainerDied","Data":"64b07b8761e43f81b341c486e4dd5a9e64ffe0310d6b08696fac6a6a1ff10bb4"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.165794 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d","Type":"ContainerStarted","Data":"72b514fc5a5844ba34d80cc5567e9e8a5b704063a3ee0c1d2f21802a766c98c3"} Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.209870 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2m7hd"] Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.220067 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2m7hd"] Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.266242 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-l5hpp"] Feb 19 08:38:52 crc kubenswrapper[4780]: I0219 08:38:52.271313 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-l5hpp"] Feb 19 08:38:53 crc kubenswrapper[4780]: I0219 08:38:53.946160 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937b2f88-5eef-4d2f-b638-62f337bbfcb2" path="/var/lib/kubelet/pods/937b2f88-5eef-4d2f-b638-62f337bbfcb2/volumes" Feb 19 08:38:53 crc kubenswrapper[4780]: I0219 08:38:53.946692 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0de2c6-47c4-4906-9eff-fe9e5fd98527" path="/var/lib/kubelet/pods/ee0de2c6-47c4-4906-9eff-fe9e5fd98527/volumes" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.193396 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerID="514c00fff12df406f7165a76b74f40b00b1ac7918ea0cd73c453c1c82402e66f" exitCode=0 Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.193568 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73cb84ca-f3ee-4c97-8c4d-0a1564822827","Type":"ContainerDied","Data":"514c00fff12df406f7165a76b74f40b00b1ac7918ea0cd73c453c1c82402e66f"} Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.197841 4780 generic.go:334] "Generic (PLEG): container finished" podID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerID="f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67" exitCode=0 Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.197871 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"01c909ff-b464-4334-a8d6-4e7a06b88126","Type":"ContainerDied","Data":"f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67"} Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.774305 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-f7dn2"] Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.775184 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.777783 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.789157 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f7dn2"] Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.877469 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovn-rundir\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.877879 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64fg\" (UniqueName: \"kubernetes.io/projected/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-kube-api-access-l64fg\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.878104 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-config\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.878258 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovs-rundir\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " 
pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.878359 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.878556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-combined-ca-bundle\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.968310 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-7lqsw"] Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.988683 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-qgwdq"] Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.989494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovs-rundir\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.989568 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 
08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.989652 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-combined-ca-bundle\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.989696 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovn-rundir\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.989725 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l64fg\" (UniqueName: \"kubernetes.io/projected/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-kube-api-access-l64fg\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.989825 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-config\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.992768 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-config\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.993026 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovn-rundir\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.993782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovs-rundir\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:55 crc kubenswrapper[4780]: I0219 08:38:55.994380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-combined-ca-bundle\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.005116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.007271 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-qgwdq"] Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.007436 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.009384 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.026947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l64fg\" (UniqueName: \"kubernetes.io/projected/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-kube-api-access-l64fg\") pod \"ovn-controller-metrics-f7dn2\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.093620 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.193674 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.193720 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ptj\" (UniqueName: \"kubernetes.io/projected/61a83eea-3e30-4edd-b066-f0805faa7746-kube-api-access-k7ptj\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.193785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-dns-svc\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " 
pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.193870 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-config\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.220089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"01c909ff-b464-4334-a8d6-4e7a06b88126","Type":"ContainerStarted","Data":"5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.238194 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c875b359-e76d-4fd0-99fb-10c8b04dfb35","Type":"ContainerStarted","Data":"fa846a4869b3acb10eb8a21741ba18e7569c9f2c7d5f843a9c97988ef168cc97"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.239071 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-qklqs"] Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.254956 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-f9jnt"] Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.263699 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.267141 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nj9cs" event={"ID":"d1721266-ba6d-49a4-b30d-049d4f4e1978","Type":"ContainerStarted","Data":"01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.267822 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nj9cs" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.268004 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.270865 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.072085216 podStartE2EDuration="31.270840311s" podCreationTimestamp="2026-02-19 08:38:25 +0000 UTC" firstStartedPulling="2026-02-19 08:38:32.533134655 +0000 UTC m=+1055.276792124" lastFinishedPulling="2026-02-19 08:38:47.73188976 +0000 UTC m=+1070.475547219" observedRunningTime="2026-02-19 08:38:56.26027087 +0000 UTC m=+1079.003928319" watchObservedRunningTime="2026-02-19 08:38:56.270840311 +0000 UTC m=+1079.014497760" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.288986 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-f9jnt"] Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.299403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-config\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.299491 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.299520 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ptj\" (UniqueName: \"kubernetes.io/projected/61a83eea-3e30-4edd-b066-f0805faa7746-kube-api-access-k7ptj\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.299557 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-dns-svc\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.301092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-config\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.301616 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-dns-svc\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.308756 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-ovsdbserver-nb\") pod 
\"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.332056 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ptj\" (UniqueName: \"kubernetes.io/projected/61a83eea-3e30-4edd-b066-f0805faa7746-kube-api-access-k7ptj\") pod \"dnsmasq-dns-57bdd75c-qgwdq\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.335531 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.345642 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73cb84ca-f3ee-4c97-8c4d-0a1564822827","Type":"ContainerStarted","Data":"a465db40f9eca8dcae409a58d79d3d9cd987c42bad7e6a4443d618b97692b1e5"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.356411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a785445-258d-4c77-a8e3-294ba1f0aca3","Type":"ContainerStarted","Data":"b3f6d237e46b0b3bb57611a162e3454e7362a84be107df1aaddb897ff7b77d95"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.357365 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.372590 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nj9cs" podStartSLOduration=20.838254726 podStartE2EDuration="24.372567725s" podCreationTimestamp="2026-02-19 08:38:32 +0000 UTC" firstStartedPulling="2026-02-19 08:38:51.557764363 +0000 UTC m=+1074.301421812" lastFinishedPulling="2026-02-19 08:38:55.092077322 +0000 UTC m=+1077.835734811" observedRunningTime="2026-02-19 08:38:56.336669158 +0000 UTC 
m=+1079.080326617" watchObservedRunningTime="2026-02-19 08:38:56.372567725 +0000 UTC m=+1079.116225174" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.378847 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9","Type":"ContainerStarted","Data":"1fcfa97a00e63654def305d1d09092ee5032ff068bd514a1b4ff17b1b6859a16"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.392306 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d459ce0-3049-4b3a-a076-682771965fc2" containerID="21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7" exitCode=0 Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.392359 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerDied","Data":"21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7"} Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.400466 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.591261346 podStartE2EDuration="33.400227108s" podCreationTimestamp="2026-02-19 08:38:23 +0000 UTC" firstStartedPulling="2026-02-19 08:38:25.84573153 +0000 UTC m=+1048.589388979" lastFinishedPulling="2026-02-19 08:38:47.654697282 +0000 UTC m=+1070.398354741" observedRunningTime="2026-02-19 08:38:56.375564409 +0000 UTC m=+1079.119221888" watchObservedRunningTime="2026-02-19 08:38:56.400227108 +0000 UTC m=+1079.143884567" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.400775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 
19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.401775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.401971 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-config\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.402033 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l7t9\" (UniqueName: \"kubernetes.io/projected/28f993ea-95bb-4158-a953-ba8d3f1ec097-kube-api-access-7l7t9\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.402063 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.408979 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.902960215 podStartE2EDuration="28.408964944s" podCreationTimestamp="2026-02-19 08:38:28 +0000 UTC" firstStartedPulling="2026-02-19 08:38:45.512985647 +0000 UTC m=+1068.256643106" 
lastFinishedPulling="2026-02-19 08:38:55.018990376 +0000 UTC m=+1077.762647835" observedRunningTime="2026-02-19 08:38:56.397388778 +0000 UTC m=+1079.141046227" watchObservedRunningTime="2026-02-19 08:38:56.408964944 +0000 UTC m=+1079.152622393" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.435189 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.504728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-config\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.504989 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l7t9\" (UniqueName: \"kubernetes.io/projected/28f993ea-95bb-4158-a953-ba8d3f1ec097-kube-api-access-7l7t9\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.505019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.505036 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc 
kubenswrapper[4780]: I0219 08:38:56.505060 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.506599 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.507254 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-config\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.508010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.510102 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.536565 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7l7t9\" (UniqueName: \"kubernetes.io/projected/28f993ea-95bb-4158-a953-ba8d3f1ec097-kube-api-access-7l7t9\") pod \"dnsmasq-dns-75b7bcc64f-f9jnt\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.605622 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.605765 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-config\") pod \"f446bd54-9c80-4a6b-904a-402540baa0c1\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.605889 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qm2\" (UniqueName: \"kubernetes.io/projected/f446bd54-9c80-4a6b-904a-402540baa0c1-kube-api-access-z5qm2\") pod \"f446bd54-9c80-4a6b-904a-402540baa0c1\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.605917 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-dns-svc\") pod \"f446bd54-9c80-4a6b-904a-402540baa0c1\" (UID: \"f446bd54-9c80-4a6b-904a-402540baa0c1\") " Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.606431 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-config" (OuterVolumeSpecName: "config") pod "f446bd54-9c80-4a6b-904a-402540baa0c1" (UID: "f446bd54-9c80-4a6b-904a-402540baa0c1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.606605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f446bd54-9c80-4a6b-904a-402540baa0c1" (UID: "f446bd54-9c80-4a6b-904a-402540baa0c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.609381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f446bd54-9c80-4a6b-904a-402540baa0c1-kube-api-access-z5qm2" (OuterVolumeSpecName: "kube-api-access-z5qm2") pod "f446bd54-9c80-4a6b-904a-402540baa0c1" (UID: "f446bd54-9c80-4a6b-904a-402540baa0c1"). InnerVolumeSpecName "kube-api-access-z5qm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.669827 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f7dn2"] Feb 19 08:38:56 crc kubenswrapper[4780]: W0219 08:38:56.707109 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1c0ef0e_b38d_48e6_b006_8e528c70ff18.slice/crio-391e44d34890f5953047c2ae14116791b4fc3def25c357ccc72c4a7f8a31e01a WatchSource:0}: Error finding container 391e44d34890f5953047c2ae14116791b4fc3def25c357ccc72c4a7f8a31e01a: Status 404 returned error can't find the container with id 391e44d34890f5953047c2ae14116791b4fc3def25c357ccc72c4a7f8a31e01a Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.708246 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.708269 4780 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-z5qm2\" (UniqueName: \"kubernetes.io/projected/f446bd54-9c80-4a6b-904a-402540baa0c1-kube-api-access-z5qm2\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.708281 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f446bd54-9c80-4a6b-904a-402540baa0c1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.732906 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-qklqs" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.810289 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-dns-svc\") pod \"5dd45d04-b13e-4452-9f53-acf41c82b84c\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.810359 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcshx\" (UniqueName: \"kubernetes.io/projected/5dd45d04-b13e-4452-9f53-acf41c82b84c-kube-api-access-kcshx\") pod \"5dd45d04-b13e-4452-9f53-acf41c82b84c\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.810513 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-config\") pod \"5dd45d04-b13e-4452-9f53-acf41c82b84c\" (UID: \"5dd45d04-b13e-4452-9f53-acf41c82b84c\") " Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.811691 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5dd45d04-b13e-4452-9f53-acf41c82b84c" (UID: 
"5dd45d04-b13e-4452-9f53-acf41c82b84c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.811728 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-config" (OuterVolumeSpecName: "config") pod "5dd45d04-b13e-4452-9f53-acf41c82b84c" (UID: "5dd45d04-b13e-4452-9f53-acf41c82b84c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.816755 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd45d04-b13e-4452-9f53-acf41c82b84c-kube-api-access-kcshx" (OuterVolumeSpecName: "kube-api-access-kcshx") pod "5dd45d04-b13e-4452-9f53-acf41c82b84c" (UID: "5dd45d04-b13e-4452-9f53-acf41c82b84c"). InnerVolumeSpecName "kube-api-access-kcshx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.901439 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-qgwdq"] Feb 19 08:38:56 crc kubenswrapper[4780]: W0219 08:38:56.904624 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61a83eea_3e30_4edd_b066_f0805faa7746.slice/crio-fa5edecd8a909fc84e40a27541182700c969ef35b69eb7ed8e4ae6204d5cdab9 WatchSource:0}: Error finding container fa5edecd8a909fc84e40a27541182700c969ef35b69eb7ed8e4ae6204d5cdab9: Status 404 returned error can't find the container with id fa5edecd8a909fc84e40a27541182700c969ef35b69eb7ed8e4ae6204d5cdab9 Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.912432 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:56 crc 
kubenswrapper[4780]: I0219 08:38:56.912463 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcshx\" (UniqueName: \"kubernetes.io/projected/5dd45d04-b13e-4452-9f53-acf41c82b84c-kube-api-access-kcshx\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.912473 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd45d04-b13e-4452-9f53-acf41c82b84c-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.991809 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:56 crc kubenswrapper[4780]: I0219 08:38:56.992204 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.065913 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.080739 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-f9jnt"] Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.400495 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" event={"ID":"61a83eea-3e30-4edd-b066-f0805faa7746","Type":"ContainerStarted","Data":"fa5edecd8a909fc84e40a27541182700c969ef35b69eb7ed8e4ae6204d5cdab9"} Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.403802 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerStarted","Data":"b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9"} Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.403850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" 
event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerStarted","Data":"253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb"} Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.404006 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.404240 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.404826 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw" event={"ID":"f446bd54-9c80-4a6b-904a-402540baa0c1","Type":"ContainerDied","Data":"bc1bad20c06102478332257ce6a9d42dccdcd453e16fb21529f716ccd8c76dae"} Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.404863 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-7lqsw" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.405611 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f7dn2" event={"ID":"b1c0ef0e-b38d-48e6-b006-8e528c70ff18","Type":"ContainerStarted","Data":"391e44d34890f5953047c2ae14116791b4fc3def25c357ccc72c4a7f8a31e01a"} Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.407714 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-qklqs" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.409253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f54874ffc-qklqs" event={"ID":"5dd45d04-b13e-4452-9f53-acf41c82b84c","Type":"ContainerDied","Data":"1cf1aee7e1e798eaeb961b4862dadd2c029af0a3cce859df8fadbb0cc9dd5e31"} Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.427563 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-s8k96" podStartSLOduration=21.909562948 podStartE2EDuration="25.427518933s" podCreationTimestamp="2026-02-19 08:38:32 +0000 UTC" firstStartedPulling="2026-02-19 08:38:51.557479405 +0000 UTC m=+1074.301136854" lastFinishedPulling="2026-02-19 08:38:55.07543536 +0000 UTC m=+1077.819092839" observedRunningTime="2026-02-19 08:38:57.422324765 +0000 UTC m=+1080.165982214" watchObservedRunningTime="2026-02-19 08:38:57.427518933 +0000 UTC m=+1080.171176392" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.474579 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-7lqsw"] Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.509482 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-7lqsw"] Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.534133 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-qklqs"] Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.543096 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-qklqs"] Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.959454 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd45d04-b13e-4452-9f53-acf41c82b84c" path="/var/lib/kubelet/pods/5dd45d04-b13e-4452-9f53-acf41c82b84c/volumes" Feb 19 08:38:57 crc kubenswrapper[4780]: I0219 08:38:57.960061 4780 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f446bd54-9c80-4a6b-904a-402540baa0c1" path="/var/lib/kubelet/pods/f446bd54-9c80-4a6b-904a-402540baa0c1/volumes" Feb 19 08:38:58 crc kubenswrapper[4780]: I0219 08:38:58.415889 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" event={"ID":"28f993ea-95bb-4158-a953-ba8d3f1ec097","Type":"ContainerStarted","Data":"5410e6f237d42884f48bda68cfe846324a8dc82ec47b6c4011c3c995f2b06e6c"} Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.225977 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-qgwdq"] Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.256490 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-vdsk7"] Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.257716 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.272987 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-vdsk7"] Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.356134 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-dns-svc\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.356210 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-config\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.356375 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.356500 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.356682 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj28t\" (UniqueName: \"kubernetes.io/projected/55561d8e-be0c-4936-895f-f31d1654cb8f-kube-api-access-sj28t\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.441276 4780 generic.go:334] "Generic (PLEG): container finished" podID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerID="a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811" exitCode=0 Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.441429 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" event={"ID":"28f993ea-95bb-4158-a953-ba8d3f1ec097","Type":"ContainerDied","Data":"a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811"} Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.445629 4780 generic.go:334] "Generic (PLEG): container finished" podID="61a83eea-3e30-4edd-b066-f0805faa7746" containerID="aedad8db6270e7b7f406cb48d87a502c5a2ff429b1aee674b0362761c15e245b" exitCode=0 Feb 
19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.445697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" event={"ID":"61a83eea-3e30-4edd-b066-f0805faa7746","Type":"ContainerDied","Data":"aedad8db6270e7b7f406cb48d87a502c5a2ff429b1aee674b0362761c15e245b"} Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.453352 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c875b359-e76d-4fd0-99fb-10c8b04dfb35","Type":"ContainerStarted","Data":"e1c828e53372b01ed0b60ce38962b074fb08a9c3280ed7aac4fbb5ce93ddbb17"} Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.456109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9","Type":"ContainerStarted","Data":"773278a211535ef5f5087e03e60839045ff960f93bda10d3048f86bb4c48be1b"} Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.457899 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.458101 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.458240 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj28t\" (UniqueName: \"kubernetes.io/projected/55561d8e-be0c-4936-895f-f31d1654cb8f-kube-api-access-sj28t\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" 
(UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.458357 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-dns-svc\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.458492 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-config\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.459416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-config\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.460036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f7dn2" event={"ID":"b1c0ef0e-b38d-48e6-b006-8e528c70ff18","Type":"ContainerStarted","Data":"3410061fc16202d0f292ee59bc88124956f67382228dfe5e24ccf6f91f2ce7cf"} Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.460388 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.460976 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-dns-svc\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.471533 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.486071 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj28t\" (UniqueName: \"kubernetes.io/projected/55561d8e-be0c-4936-895f-f31d1654cb8f-kube-api-access-sj28t\") pod \"dnsmasq-dns-689df5d84f-vdsk7\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") " pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.490731 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.938447262 podStartE2EDuration="27.490713909s" podCreationTimestamp="2026-02-19 08:38:32 +0000 UTC" firstStartedPulling="2026-02-19 08:38:51.558023349 +0000 UTC m=+1074.301680798" lastFinishedPulling="2026-02-19 08:38:58.110289976 +0000 UTC m=+1080.853947445" observedRunningTime="2026-02-19 08:38:59.481238244 +0000 UTC m=+1082.224895693" watchObservedRunningTime="2026-02-19 08:38:59.490713909 +0000 UTC m=+1082.234371358" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.547204 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.020636163 podStartE2EDuration="25.547186804s" podCreationTimestamp="2026-02-19 08:38:34 +0000 UTC" firstStartedPulling="2026-02-19 
08:38:51.561613628 +0000 UTC m=+1074.305271077" lastFinishedPulling="2026-02-19 08:38:58.088164269 +0000 UTC m=+1080.831821718" observedRunningTime="2026-02-19 08:38:59.540440127 +0000 UTC m=+1082.284097576" watchObservedRunningTime="2026-02-19 08:38:59.547186804 +0000 UTC m=+1082.290844253" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.559939 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-f7dn2" podStartSLOduration=3.185751802 podStartE2EDuration="4.559923329s" podCreationTimestamp="2026-02-19 08:38:55 +0000 UTC" firstStartedPulling="2026-02-19 08:38:56.708968428 +0000 UTC m=+1079.452625877" lastFinishedPulling="2026-02-19 08:38:58.083139955 +0000 UTC m=+1080.826797404" observedRunningTime="2026-02-19 08:38:59.557732835 +0000 UTC m=+1082.301390284" watchObservedRunningTime="2026-02-19 08:38:59.559923329 +0000 UTC m=+1082.303580778" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.581500 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:38:59 crc kubenswrapper[4780]: E0219 08:38:59.720447 4780 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 08:38:59 crc kubenswrapper[4780]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/28f993ea-95bb-4158-a953-ba8d3f1ec097/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 08:38:59 crc kubenswrapper[4780]: > podSandboxID="5410e6f237d42884f48bda68cfe846324a8dc82ec47b6c4011c3c995f2b06e6c" Feb 19 08:38:59 crc kubenswrapper[4780]: E0219 08:38:59.721066 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:38:59 crc kubenswrapper[4780]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7l7t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-75b7bcc64f-f9jnt_openstack(28f993ea-95bb-4158-a953-ba8d3f1ec097): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/28f993ea-95bb-4158-a953-ba8d3f1ec097/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 08:38:59 crc kubenswrapper[4780]: > logger="UnhandledError" Feb 19 08:38:59 crc kubenswrapper[4780]: E0219 08:38:59.722163 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/28f993ea-95bb-4158-a953-ba8d3f1ec097/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" Feb 19 08:38:59 crc kubenswrapper[4780]: E0219 08:38:59.760760 4780 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 08:38:59 crc kubenswrapper[4780]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/61a83eea-3e30-4edd-b066-f0805faa7746/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 08:38:59 crc kubenswrapper[4780]: > podSandboxID="fa5edecd8a909fc84e40a27541182700c969ef35b69eb7ed8e4ae6204d5cdab9" Feb 19 08:38:59 crc kubenswrapper[4780]: E0219 08:38:59.760916 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:38:59 crc kubenswrapper[4780]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58bh65dh95hf6h595hf6hf5h59dh6h57dh558h55ch5dbh5f5h565h5f7h9fh76h58ch54dh84h59bh7fh6bh5b9h59h67fh566h56h5f4h554h58fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7ptj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceac
count,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57bdd75c-qgwdq_openstack(61a83eea-3e30-4edd-b066-f0805faa7746): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/61a83eea-3e30-4edd-b066-f0805faa7746/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 08:38:59 crc kubenswrapper[4780]: > logger="UnhandledError" Feb 19 08:38:59 crc kubenswrapper[4780]: E0219 08:38:59.763020 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/61a83eea-3e30-4edd-b066-f0805faa7746/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or 
directory\\n\"" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" podUID="61a83eea-3e30-4edd-b066-f0805faa7746" Feb 19 08:38:59 crc kubenswrapper[4780]: I0219 08:38:59.888969 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-vdsk7"] Feb 19 08:38:59 crc kubenswrapper[4780]: W0219 08:38:59.904502 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55561d8e_be0c_4936_895f_f31d1654cb8f.slice/crio-1ea8e289050f0fdc2b641d3a6f2f131d02d576d95a5f24f3ed71d456653de405 WatchSource:0}: Error finding container 1ea8e289050f0fdc2b641d3a6f2f131d02d576d95a5f24f3ed71d456653de405: Status 404 returned error can't find the container with id 1ea8e289050f0fdc2b641d3a6f2f131d02d576d95a5f24f3ed71d456653de405 Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.025924 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.069905 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.391739 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.399420 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.402137 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.402182 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.402513 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-bv2wd" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.404566 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.406544 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.467944 4780 generic.go:334] "Generic (PLEG): container finished" podID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerID="1fa75cebf640685724bc728308305bd8bdc9c2b2c79d78320bb500c96f439103" exitCode=0 Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.468037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" event={"ID":"55561d8e-be0c-4936-895f-f31d1654cb8f","Type":"ContainerDied","Data":"1fa75cebf640685724bc728308305bd8bdc9c2b2c79d78320bb500c96f439103"} Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.468074 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" event={"ID":"55561d8e-be0c-4936-895f-f31d1654cb8f","Type":"ContainerStarted","Data":"1ea8e289050f0fdc2b641d3a6f2f131d02d576d95a5f24f3ed71d456653de405"} Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.471190 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.546749 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.578446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-lock\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.578492 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.578553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-cache\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.578574 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwpq\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-kube-api-access-sdwpq\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.578614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f6be70-b99e-42e2-ada9-535daa67785c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc 
kubenswrapper[4780]: I0219 08:39:00.578697 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.621736 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681499 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-lock\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681594 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-cache\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwpq\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-kube-api-access-sdwpq\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681651 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f6be70-b99e-42e2-ada9-535daa67785c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: E0219 08:39:00.681847 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 08:39:00 crc kubenswrapper[4780]: E0219 08:39:00.681872 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.681905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-lock\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: E0219 08:39:00.681928 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift podName:81f6be70-b99e-42e2-ada9-535daa67785c nodeName:}" failed. No retries permitted until 2026-02-19 08:39:01.181902845 +0000 UTC m=+1083.925560294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift") pod "swift-storage-0" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c") : configmap "swift-ring-files" not found Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.682154 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-cache\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.682379 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.684542 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.694233 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f6be70-b99e-42e2-ada9-535daa67785c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.710174 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwpq\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-kube-api-access-sdwpq\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.743556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.758743 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.883465 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-config\") pod \"61a83eea-3e30-4edd-b066-f0805faa7746\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.883557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ptj\" (UniqueName: \"kubernetes.io/projected/61a83eea-3e30-4edd-b066-f0805faa7746-kube-api-access-k7ptj\") pod \"61a83eea-3e30-4edd-b066-f0805faa7746\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.883585 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-ovsdbserver-nb\") pod \"61a83eea-3e30-4edd-b066-f0805faa7746\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.883711 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-dns-svc\") pod \"61a83eea-3e30-4edd-b066-f0805faa7746\" (UID: \"61a83eea-3e30-4edd-b066-f0805faa7746\") " Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.886887 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/61a83eea-3e30-4edd-b066-f0805faa7746-kube-api-access-k7ptj" (OuterVolumeSpecName: "kube-api-access-k7ptj") pod "61a83eea-3e30-4edd-b066-f0805faa7746" (UID: "61a83eea-3e30-4edd-b066-f0805faa7746"). InnerVolumeSpecName "kube-api-access-k7ptj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.926091 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rbjfc"] Feb 19 08:39:00 crc kubenswrapper[4780]: E0219 08:39:00.926526 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a83eea-3e30-4edd-b066-f0805faa7746" containerName="init" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.926538 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a83eea-3e30-4edd-b066-f0805faa7746" containerName="init" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.926755 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a83eea-3e30-4edd-b066-f0805faa7746" containerName="init" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.927334 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.940836 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rbjfc"] Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.957515 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.958149 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.958236 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.969708 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61a83eea-3e30-4edd-b066-f0805faa7746" (UID: "61a83eea-3e30-4edd-b066-f0805faa7746"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.969774 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61a83eea-3e30-4edd-b066-f0805faa7746" (UID: "61a83eea-3e30-4edd-b066-f0805faa7746"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.971720 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-config" (OuterVolumeSpecName: "config") pod "61a83eea-3e30-4edd-b066-f0805faa7746" (UID: "61a83eea-3e30-4edd-b066-f0805faa7746"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.985676 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.985703 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.985714 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ptj\" (UniqueName: \"kubernetes.io/projected/61a83eea-3e30-4edd-b066-f0805faa7746-kube-api-access-k7ptj\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:00 crc kubenswrapper[4780]: I0219 08:39:00.985724 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61a83eea-3e30-4edd-b066-f0805faa7746-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.086873 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-ring-data-devices\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.086923 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-scripts\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.087038 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-swiftconf\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.087225 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/690f441d-627e-4fc9-aee6-069a9d11946f-etc-swift\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.087293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-combined-ca-bundle\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.087321 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-dispersionconf\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.087453 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcmr\" (UniqueName: \"kubernetes.io/projected/690f441d-627e-4fc9-aee6-069a9d11946f-kube-api-access-xgcmr\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 
08:39:01.189049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-swiftconf\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189196 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/690f441d-627e-4fc9-aee6-069a9d11946f-etc-swift\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189247 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-combined-ca-bundle\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189272 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-dispersionconf\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189314 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcmr\" 
(UniqueName: \"kubernetes.io/projected/690f441d-627e-4fc9-aee6-069a9d11946f-kube-api-access-xgcmr\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: E0219 08:39:01.189327 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189353 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-ring-data-devices\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.189374 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-scripts\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: E0219 08:39:01.189355 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 08:39:01 crc kubenswrapper[4780]: E0219 08:39:01.189518 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift podName:81f6be70-b99e-42e2-ada9-535daa67785c nodeName:}" failed. No retries permitted until 2026-02-19 08:39:02.189488387 +0000 UTC m=+1084.933145836 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift") pod "swift-storage-0" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c") : configmap "swift-ring-files" not found Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.190110 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-scripts\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.190115 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-ring-data-devices\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.190321 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/690f441d-627e-4fc9-aee6-069a9d11946f-etc-swift\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.192428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-dispersionconf\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.192554 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-swiftconf\") pod 
\"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.194368 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-combined-ca-bundle\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.205377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgcmr\" (UniqueName: \"kubernetes.io/projected/690f441d-627e-4fc9-aee6-069a9d11946f-kube-api-access-xgcmr\") pod \"swift-ring-rebalance-rbjfc\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.279893 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.485868 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" event={"ID":"28f993ea-95bb-4158-a953-ba8d3f1ec097","Type":"ContainerStarted","Data":"ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d"} Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.487232 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.489802 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" event={"ID":"61a83eea-3e30-4edd-b066-f0805faa7746","Type":"ContainerDied","Data":"fa5edecd8a909fc84e40a27541182700c969ef35b69eb7ed8e4ae6204d5cdab9"} Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.489844 4780 scope.go:117] "RemoveContainer" containerID="aedad8db6270e7b7f406cb48d87a502c5a2ff429b1aee674b0362761c15e245b" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.489968 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-qgwdq" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.498357 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rbjfc"] Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.506227 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" event={"ID":"55561d8e-be0c-4936-895f-f31d1654cb8f","Type":"ContainerStarted","Data":"08ba14a622f38f53deafda19fd51a8bc3be19364ea4ae11205cda0f38702178d"} Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.508440 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.508579 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.511104 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" podStartSLOduration=5.077792376 podStartE2EDuration="5.511082244s" podCreationTimestamp="2026-02-19 08:38:56 +0000 UTC" firstStartedPulling="2026-02-19 08:38:58.028770111 +0000 UTC m=+1080.772427560" lastFinishedPulling="2026-02-19 08:38:58.462059979 +0000 UTC m=+1081.205717428" observedRunningTime="2026-02-19 08:39:01.505615529 +0000 UTC m=+1084.249272998" watchObservedRunningTime="2026-02-19 08:39:01.511082244 +0000 UTC m=+1084.254739693" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.532603 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" podStartSLOduration=2.532588576 podStartE2EDuration="2.532588576s" podCreationTimestamp="2026-02-19 08:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:01.532073893 +0000 UTC m=+1084.275731352" 
watchObservedRunningTime="2026-02-19 08:39:01.532588576 +0000 UTC m=+1084.276246035" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.576291 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.582955 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-qgwdq"] Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.588380 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-qgwdq"] Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.809000 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.810384 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.813509 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.813653 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cw5hv" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.813783 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.814285 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.834152 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.906602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-config\") pod \"ovn-northd-0\" 
(UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.906677 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-scripts\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.906707 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.906740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.906763 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.906838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c517061-49de-445a-955e-006cbf09b6fd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc 
kubenswrapper[4780]: I0219 08:39:01.907059 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msv56\" (UniqueName: \"kubernetes.io/projected/2c517061-49de-445a-955e-006cbf09b6fd-kube-api-access-msv56\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:01 crc kubenswrapper[4780]: I0219 08:39:01.946931 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a83eea-3e30-4edd-b066-f0805faa7746" path="/var/lib/kubelet/pods/61a83eea-3e30-4edd-b066-f0805faa7746/volumes" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008644 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-config\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-scripts\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008730 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008768 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008823 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c517061-49de-445a-955e-006cbf09b6fd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.008903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msv56\" (UniqueName: \"kubernetes.io/projected/2c517061-49de-445a-955e-006cbf09b6fd-kube-api-access-msv56\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.009952 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-config\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.010525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-scripts\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.013440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2c517061-49de-445a-955e-006cbf09b6fd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.022858 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.023070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.023457 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.029100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msv56\" (UniqueName: \"kubernetes.io/projected/2c517061-49de-445a-955e-006cbf09b6fd-kube-api-access-msv56\") pod \"ovn-northd-0\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.126488 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.212388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:02 crc kubenswrapper[4780]: E0219 08:39:02.212637 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 08:39:02 crc kubenswrapper[4780]: E0219 08:39:02.212655 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 08:39:02 crc kubenswrapper[4780]: E0219 08:39:02.212704 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift podName:81f6be70-b99e-42e2-ada9-535daa67785c nodeName:}" failed. No retries permitted until 2026-02-19 08:39:04.212687922 +0000 UTC m=+1086.956345371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift") pod "swift-storage-0" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c") : configmap "swift-ring-files" not found Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.518582 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbjfc" event={"ID":"690f441d-627e-4fc9-aee6-069a9d11946f","Type":"ContainerStarted","Data":"cfa0866e3886db58ee20549b2efebc527fa0fbaa97e7721dfd2bb2b9d72135df"} Feb 19 08:39:02 crc kubenswrapper[4780]: I0219 08:39:02.577772 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 08:39:02 crc kubenswrapper[4780]: W0219 08:39:02.591723 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c517061_49de_445a_955e_006cbf09b6fd.slice/crio-7b285ca0f694d9ce1daaa28a431008337d22f2a74c7a26a06abd3f5453e288fc WatchSource:0}: Error finding container 7b285ca0f694d9ce1daaa28a431008337d22f2a74c7a26a06abd3f5453e288fc: Status 404 returned error can't find the container with id 7b285ca0f694d9ce1daaa28a431008337d22f2a74c7a26a06abd3f5453e288fc Feb 19 08:39:03 crc kubenswrapper[4780]: I0219 08:39:03.094226 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 08:39:03 crc kubenswrapper[4780]: I0219 08:39:03.171541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 08:39:03 crc kubenswrapper[4780]: I0219 08:39:03.534084 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c517061-49de-445a-955e-006cbf09b6fd","Type":"ContainerStarted","Data":"7b285ca0f694d9ce1daaa28a431008337d22f2a74c7a26a06abd3f5453e288fc"} Feb 19 08:39:04 crc kubenswrapper[4780]: I0219 08:39:04.262180 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:04 crc kubenswrapper[4780]: E0219 08:39:04.262379 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 08:39:04 crc kubenswrapper[4780]: E0219 08:39:04.262414 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 08:39:04 crc kubenswrapper[4780]: E0219 08:39:04.262479 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift podName:81f6be70-b99e-42e2-ada9-535daa67785c nodeName:}" failed. No retries permitted until 2026-02-19 08:39:08.262459766 +0000 UTC m=+1091.006117215 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift") pod "swift-storage-0" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c") : configmap "swift-ring-files" not found Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.273443 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.273508 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.362991 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.422170 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wl8hp"] Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.423253 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.425000 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.430792 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wl8hp"] Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.481646 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/7eddfd0e-0c25-4ea0-83a9-01f411602182-kube-api-access-jbc6h\") pod \"root-account-create-update-wl8hp\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.481771 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eddfd0e-0c25-4ea0-83a9-01f411602182-operator-scripts\") pod \"root-account-create-update-wl8hp\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.556320 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbjfc" event={"ID":"690f441d-627e-4fc9-aee6-069a9d11946f","Type":"ContainerStarted","Data":"71c4933d930bf50c88e918b9407c4855c895d0148329d0083c50ac79c8bebef9"} Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.579572 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rbjfc" podStartSLOduration=2.110966158 podStartE2EDuration="5.579552452s" podCreationTimestamp="2026-02-19 08:39:00 +0000 UTC" firstStartedPulling="2026-02-19 08:39:01.522264411 +0000 UTC m=+1084.265921880" lastFinishedPulling="2026-02-19 
08:39:04.990850735 +0000 UTC m=+1087.734508174" observedRunningTime="2026-02-19 08:39:05.571506684 +0000 UTC m=+1088.315164133" watchObservedRunningTime="2026-02-19 08:39:05.579552452 +0000 UTC m=+1088.323209901" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.583673 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/7eddfd0e-0c25-4ea0-83a9-01f411602182-kube-api-access-jbc6h\") pod \"root-account-create-update-wl8hp\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.583743 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eddfd0e-0c25-4ea0-83a9-01f411602182-operator-scripts\") pod \"root-account-create-update-wl8hp\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.584707 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eddfd0e-0c25-4ea0-83a9-01f411602182-operator-scripts\") pod \"root-account-create-update-wl8hp\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.601984 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/7eddfd0e-0c25-4ea0-83a9-01f411602182-kube-api-access-jbc6h\") pod \"root-account-create-update-wl8hp\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.678289 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 
08:39:05 crc kubenswrapper[4780]: I0219 08:39:05.749656 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.337237 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.337590 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.371257 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wl8hp"] Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.609974 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.616234 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c517061-49de-445a-955e-006cbf09b6fd","Type":"ContainerStarted","Data":"bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980"} Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.616287 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c517061-49de-445a-955e-006cbf09b6fd","Type":"ContainerStarted","Data":"658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7"} Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.616419 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-northd-0" Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.617734 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wl8hp" event={"ID":"7eddfd0e-0c25-4ea0-83a9-01f411602182","Type":"ContainerStarted","Data":"fc596295b74a45703ec81bfbed11e159ebdeaef402c38baf205599082b54771b"} Feb 19 08:39:06 crc kubenswrapper[4780]: I0219 08:39:06.667865 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.760250535 podStartE2EDuration="5.667844826s" podCreationTimestamp="2026-02-19 08:39:01 +0000 UTC" firstStartedPulling="2026-02-19 08:39:02.594442776 +0000 UTC m=+1085.338100225" lastFinishedPulling="2026-02-19 08:39:05.502037067 +0000 UTC m=+1088.245694516" observedRunningTime="2026-02-19 08:39:06.654322892 +0000 UTC m=+1089.397980341" watchObservedRunningTime="2026-02-19 08:39:06.667844826 +0000 UTC m=+1089.411502275" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.223972 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4hz99"] Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.235216 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4hz99"] Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.235324 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.341816 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ef86-account-create-update-6fgrs"] Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.342990 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.346297 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.354625 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ef86-account-create-update-6fgrs"] Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.425501 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4139c0c2-1d42-4e2d-89ac-240b1719eb16-operator-scripts\") pod \"glance-db-create-4hz99\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.425546 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhd2n\" (UniqueName: \"kubernetes.io/projected/4139c0c2-1d42-4e2d-89ac-240b1719eb16-kube-api-access-jhd2n\") pod \"glance-db-create-4hz99\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.527425 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5fe54c-d700-4f46-9091-f9f3d4bca327-operator-scripts\") pod \"glance-ef86-account-create-update-6fgrs\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.527693 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4139c0c2-1d42-4e2d-89ac-240b1719eb16-operator-scripts\") pod \"glance-db-create-4hz99\" (UID: 
\"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.527757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhd2n\" (UniqueName: \"kubernetes.io/projected/4139c0c2-1d42-4e2d-89ac-240b1719eb16-kube-api-access-jhd2n\") pod \"glance-db-create-4hz99\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.528069 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlvc\" (UniqueName: \"kubernetes.io/projected/4a5fe54c-d700-4f46-9091-f9f3d4bca327-kube-api-access-wqlvc\") pod \"glance-ef86-account-create-update-6fgrs\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.529327 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4139c0c2-1d42-4e2d-89ac-240b1719eb16-operator-scripts\") pod \"glance-db-create-4hz99\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.568476 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhd2n\" (UniqueName: \"kubernetes.io/projected/4139c0c2-1d42-4e2d-89ac-240b1719eb16-kube-api-access-jhd2n\") pod \"glance-db-create-4hz99\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.626964 4780 generic.go:334] "Generic (PLEG): container finished" podID="7eddfd0e-0c25-4ea0-83a9-01f411602182" containerID="f8d6415b61380e5d7e78f85a4160c7b86ef8975d68a5fbf9fbda2814a02de3b0" exitCode=0 Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.627069 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wl8hp" event={"ID":"7eddfd0e-0c25-4ea0-83a9-01f411602182","Type":"ContainerDied","Data":"f8d6415b61380e5d7e78f85a4160c7b86ef8975d68a5fbf9fbda2814a02de3b0"} Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.628983 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlvc\" (UniqueName: \"kubernetes.io/projected/4a5fe54c-d700-4f46-9091-f9f3d4bca327-kube-api-access-wqlvc\") pod \"glance-ef86-account-create-update-6fgrs\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.629042 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5fe54c-d700-4f46-9091-f9f3d4bca327-operator-scripts\") pod \"glance-ef86-account-create-update-6fgrs\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.630035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5fe54c-d700-4f46-9091-f9f3d4bca327-operator-scripts\") pod \"glance-ef86-account-create-update-6fgrs\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.674712 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlvc\" (UniqueName: \"kubernetes.io/projected/4a5fe54c-d700-4f46-9091-f9f3d4bca327-kube-api-access-wqlvc\") pod \"glance-ef86-account-create-update-6fgrs\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.862634 4780 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4hz99" Feb 19 08:39:07 crc kubenswrapper[4780]: I0219 08:39:07.963299 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.136586 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-w7s5g"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.138207 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.149203 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3c22-account-create-update-p5jfv"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.150482 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.152499 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.154322 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-w7s5g"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.165180 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3c22-account-create-update-p5jfv"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.262062 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e5a977-7deb-4a69-b388-8050af25ae68-operator-scripts\") pod \"keystone-3c22-account-create-update-p5jfv\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.262436 
4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhrn\" (UniqueName: \"kubernetes.io/projected/c6e5a977-7deb-4a69-b388-8050af25ae68-kube-api-access-vxhrn\") pod \"keystone-3c22-account-create-update-p5jfv\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.262557 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2l6w\" (UniqueName: \"kubernetes.io/projected/5ad6a771-c42f-4893-9d53-488723d532b1-kube-api-access-m2l6w\") pod \"keystone-db-create-w7s5g\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.262694 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad6a771-c42f-4893-9d53-488723d532b1-operator-scripts\") pod \"keystone-db-create-w7s5g\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.344915 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gbfmd"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.347097 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.361076 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7aae-account-create-update-q7lfm"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.362451 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364081 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvrvp\" (UniqueName: \"kubernetes.io/projected/9cef886d-8b12-490f-9860-de378fc3d6fb-kube-api-access-dvrvp\") pod \"placement-db-create-gbfmd\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364307 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f3e284-d8e1-4544-99e7-ac76fe479470-operator-scripts\") pod \"placement-7aae-account-create-update-q7lfm\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad6a771-c42f-4893-9d53-488723d532b1-operator-scripts\") pod \"keystone-db-create-w7s5g\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2fx\" (UniqueName: \"kubernetes.io/projected/d0f3e284-d8e1-4544-99e7-ac76fe479470-kube-api-access-st2fx\") pod \"placement-7aae-account-create-update-q7lfm\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364708 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364815 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e5a977-7deb-4a69-b388-8050af25ae68-operator-scripts\") pod \"keystone-3c22-account-create-update-p5jfv\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.364977 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhrn\" (UniqueName: \"kubernetes.io/projected/c6e5a977-7deb-4a69-b388-8050af25ae68-kube-api-access-vxhrn\") pod \"keystone-3c22-account-create-update-p5jfv\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.365290 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cef886d-8b12-490f-9860-de378fc3d6fb-operator-scripts\") pod \"placement-db-create-gbfmd\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.365463 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2l6w\" (UniqueName: \"kubernetes.io/projected/5ad6a771-c42f-4893-9d53-488723d532b1-kube-api-access-m2l6w\") pod \"keystone-db-create-w7s5g\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.365597 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 
19 08:39:08 crc kubenswrapper[4780]: E0219 08:39:08.365929 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 08:39:08 crc kubenswrapper[4780]: E0219 08:39:08.366016 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 08:39:08 crc kubenswrapper[4780]: E0219 08:39:08.366113 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift podName:81f6be70-b99e-42e2-ada9-535daa67785c nodeName:}" failed. No retries permitted until 2026-02-19 08:39:16.366099132 +0000 UTC m=+1099.109756581 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift") pod "swift-storage-0" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c") : configmap "swift-ring-files" not found Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.366513 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad6a771-c42f-4893-9d53-488723d532b1-operator-scripts\") pod \"keystone-db-create-w7s5g\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.367038 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e5a977-7deb-4a69-b388-8050af25ae68-operator-scripts\") pod \"keystone-3c22-account-create-update-p5jfv\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.377929 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gbfmd"] Feb 19 08:39:08 crc kubenswrapper[4780]: 
I0219 08:39:08.388951 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7aae-account-create-update-q7lfm"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.393814 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhrn\" (UniqueName: \"kubernetes.io/projected/c6e5a977-7deb-4a69-b388-8050af25ae68-kube-api-access-vxhrn\") pod \"keystone-3c22-account-create-update-p5jfv\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.401062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2l6w\" (UniqueName: \"kubernetes.io/projected/5ad6a771-c42f-4893-9d53-488723d532b1-kube-api-access-m2l6w\") pod \"keystone-db-create-w7s5g\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.433089 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4hz99"] Feb 19 08:39:08 crc kubenswrapper[4780]: W0219 08:39:08.439990 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4139c0c2_1d42_4e2d_89ac_240b1719eb16.slice/crio-eabc5c658c97a5464daf4be5b352c431da6a1121bcf3b3adf484229dcf20863c WatchSource:0}: Error finding container eabc5c658c97a5464daf4be5b352c431da6a1121bcf3b3adf484229dcf20863c: Status 404 returned error can't find the container with id eabc5c658c97a5464daf4be5b352c431da6a1121bcf3b3adf484229dcf20863c Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.468893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2fx\" (UniqueName: \"kubernetes.io/projected/d0f3e284-d8e1-4544-99e7-ac76fe479470-kube-api-access-st2fx\") pod \"placement-7aae-account-create-update-q7lfm\" (UID: 
\"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.469244 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cef886d-8b12-490f-9860-de378fc3d6fb-operator-scripts\") pod \"placement-db-create-gbfmd\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.470086 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.471758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cef886d-8b12-490f-9860-de378fc3d6fb-operator-scripts\") pod \"placement-db-create-gbfmd\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.471873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvrvp\" (UniqueName: \"kubernetes.io/projected/9cef886d-8b12-490f-9860-de378fc3d6fb-kube-api-access-dvrvp\") pod \"placement-db-create-gbfmd\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.471934 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f3e284-d8e1-4544-99e7-ac76fe479470-operator-scripts\") pod \"placement-7aae-account-create-update-q7lfm\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.472825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f3e284-d8e1-4544-99e7-ac76fe479470-operator-scripts\") pod \"placement-7aae-account-create-update-q7lfm\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.473012 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.488512 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2fx\" (UniqueName: \"kubernetes.io/projected/d0f3e284-d8e1-4544-99e7-ac76fe479470-kube-api-access-st2fx\") pod \"placement-7aae-account-create-update-q7lfm\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.497746 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvrvp\" (UniqueName: \"kubernetes.io/projected/9cef886d-8b12-490f-9860-de378fc3d6fb-kube-api-access-dvrvp\") pod \"placement-db-create-gbfmd\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: W0219 08:39:08.544691 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a5fe54c_d700_4f46_9091_f9f3d4bca327.slice/crio-361fdee80f495d3d319124a0353f387d6fb5de1f3a2323699d8bf448b485eddf WatchSource:0}: Error finding container 361fdee80f495d3d319124a0353f387d6fb5de1f3a2323699d8bf448b485eddf: Status 404 returned error can't find the container with id 361fdee80f495d3d319124a0353f387d6fb5de1f3a2323699d8bf448b485eddf Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.552325 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-ef86-account-create-update-6fgrs"] Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.640820 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4hz99" event={"ID":"4139c0c2-1d42-4e2d-89ac-240b1719eb16","Type":"ContainerStarted","Data":"eabc5c658c97a5464daf4be5b352c431da6a1121bcf3b3adf484229dcf20863c"} Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.643480 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef86-account-create-update-6fgrs" event={"ID":"4a5fe54c-d700-4f46-9091-f9f3d4bca327","Type":"ContainerStarted","Data":"361fdee80f495d3d319124a0353f387d6fb5de1f3a2323699d8bf448b485eddf"} Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.683075 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:08 crc kubenswrapper[4780]: I0219 08:39:08.691791 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.050105 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3c22-account-create-update-p5jfv"] Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.247787 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.271405 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.300164 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/7eddfd0e-0c25-4ea0-83a9-01f411602182-kube-api-access-jbc6h\") pod \"7eddfd0e-0c25-4ea0-83a9-01f411602182\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.300282 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eddfd0e-0c25-4ea0-83a9-01f411602182-operator-scripts\") pod \"7eddfd0e-0c25-4ea0-83a9-01f411602182\" (UID: \"7eddfd0e-0c25-4ea0-83a9-01f411602182\") " Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.300955 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eddfd0e-0c25-4ea0-83a9-01f411602182-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7eddfd0e-0c25-4ea0-83a9-01f411602182" (UID: "7eddfd0e-0c25-4ea0-83a9-01f411602182"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.305774 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eddfd0e-0c25-4ea0-83a9-01f411602182-kube-api-access-jbc6h" (OuterVolumeSpecName: "kube-api-access-jbc6h") pod "7eddfd0e-0c25-4ea0-83a9-01f411602182" (UID: "7eddfd0e-0c25-4ea0-83a9-01f411602182"). InnerVolumeSpecName "kube-api-access-jbc6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.363847 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-w7s5g"] Feb 19 08:39:09 crc kubenswrapper[4780]: W0219 08:39:09.395401 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ad6a771_c42f_4893_9d53_488723d532b1.slice/crio-8b496a8361212691b97c445af091eb9683abcc9d3d58fb4e24dd3466a4765cae WatchSource:0}: Error finding container 8b496a8361212691b97c445af091eb9683abcc9d3d58fb4e24dd3466a4765cae: Status 404 returned error can't find the container with id 8b496a8361212691b97c445af091eb9683abcc9d3d58fb4e24dd3466a4765cae Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.402068 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbc6h\" (UniqueName: \"kubernetes.io/projected/7eddfd0e-0c25-4ea0-83a9-01f411602182-kube-api-access-jbc6h\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.402092 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eddfd0e-0c25-4ea0-83a9-01f411602182-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.493783 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7aae-account-create-update-q7lfm"] Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.520402 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gbfmd"] Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.583438 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.647890 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-f9jnt"] Feb 
19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.648184 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerName="dnsmasq-dns" containerID="cri-o://ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d" gracePeriod=10 Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.658439 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wl8hp" event={"ID":"7eddfd0e-0c25-4ea0-83a9-01f411602182","Type":"ContainerDied","Data":"fc596295b74a45703ec81bfbed11e159ebdeaef402c38baf205599082b54771b"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.658475 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc596295b74a45703ec81bfbed11e159ebdeaef402c38baf205599082b54771b" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.658538 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wl8hp" Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.666605 4780 generic.go:334] "Generic (PLEG): container finished" podID="4a5fe54c-d700-4f46-9091-f9f3d4bca327" containerID="8a4822323cbe0de91a7339cdab1edaa575463e00b274ef17baf200c5215124e9" exitCode=0 Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.666702 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef86-account-create-update-6fgrs" event={"ID":"4a5fe54c-d700-4f46-9091-f9f3d4bca327","Type":"ContainerDied","Data":"8a4822323cbe0de91a7339cdab1edaa575463e00b274ef17baf200c5215124e9"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.669986 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbfmd" event={"ID":"9cef886d-8b12-490f-9860-de378fc3d6fb","Type":"ContainerStarted","Data":"80cc2a4865d270fafd4178fbaa2da3994d35a578150bb0c0df0eea9e6a63b693"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.671627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-w7s5g" event={"ID":"5ad6a771-c42f-4893-9d53-488723d532b1","Type":"ContainerStarted","Data":"b350240a4378abb9db72f535c5c98f2291baf23455875cdea354e0f4ed27661f"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.671688 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-w7s5g" event={"ID":"5ad6a771-c42f-4893-9d53-488723d532b1","Type":"ContainerStarted","Data":"8b496a8361212691b97c445af091eb9683abcc9d3d58fb4e24dd3466a4765cae"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.672634 4780 generic.go:334] "Generic (PLEG): container finished" podID="c6e5a977-7deb-4a69-b388-8050af25ae68" containerID="cc27237022f809cd63a9926b55592579736e25ba63040266efa1bf7931831723" exitCode=0 Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.672699 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-3c22-account-create-update-p5jfv" event={"ID":"c6e5a977-7deb-4a69-b388-8050af25ae68","Type":"ContainerDied","Data":"cc27237022f809cd63a9926b55592579736e25ba63040266efa1bf7931831723"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.672725 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3c22-account-create-update-p5jfv" event={"ID":"c6e5a977-7deb-4a69-b388-8050af25ae68","Type":"ContainerStarted","Data":"4a6a086457fdcb6c783ce33045e4cfd20bc82300a7029173a0a59cc67ebaf922"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.676989 4780 generic.go:334] "Generic (PLEG): container finished" podID="4139c0c2-1d42-4e2d-89ac-240b1719eb16" containerID="5c1a8193962c4508a41af80a87835fe20e9486a20d8cc9ca42f8fb94ff2a53a8" exitCode=0 Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.677034 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4hz99" event={"ID":"4139c0c2-1d42-4e2d-89ac-240b1719eb16","Type":"ContainerDied","Data":"5c1a8193962c4508a41af80a87835fe20e9486a20d8cc9ca42f8fb94ff2a53a8"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.678399 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7aae-account-create-update-q7lfm" event={"ID":"d0f3e284-d8e1-4544-99e7-ac76fe479470","Type":"ContainerStarted","Data":"658e487df17229bb5319b7672a401fb6afabb475b186a6d1ae3415b17c3224b4"} Feb 19 08:39:09 crc kubenswrapper[4780]: I0219 08:39:09.743276 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-w7s5g" podStartSLOduration=1.7432618739999999 podStartE2EDuration="1.743261874s" podCreationTimestamp="2026-02-19 08:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:09.73864042 +0000 UTC m=+1092.482297869" watchObservedRunningTime="2026-02-19 08:39:09.743261874 +0000 UTC m=+1092.486919323" 
Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.226320 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.329988 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-dns-svc\") pod \"28f993ea-95bb-4158-a953-ba8d3f1ec097\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.330070 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-nb\") pod \"28f993ea-95bb-4158-a953-ba8d3f1ec097\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.330153 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-config\") pod \"28f993ea-95bb-4158-a953-ba8d3f1ec097\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.330223 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-sb\") pod \"28f993ea-95bb-4158-a953-ba8d3f1ec097\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.331036 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l7t9\" (UniqueName: \"kubernetes.io/projected/28f993ea-95bb-4158-a953-ba8d3f1ec097-kube-api-access-7l7t9\") pod \"28f993ea-95bb-4158-a953-ba8d3f1ec097\" (UID: \"28f993ea-95bb-4158-a953-ba8d3f1ec097\") " Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 
08:39:10.335751 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f993ea-95bb-4158-a953-ba8d3f1ec097-kube-api-access-7l7t9" (OuterVolumeSpecName: "kube-api-access-7l7t9") pod "28f993ea-95bb-4158-a953-ba8d3f1ec097" (UID: "28f993ea-95bb-4158-a953-ba8d3f1ec097"). InnerVolumeSpecName "kube-api-access-7l7t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.371295 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28f993ea-95bb-4158-a953-ba8d3f1ec097" (UID: "28f993ea-95bb-4158-a953-ba8d3f1ec097"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.376696 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-config" (OuterVolumeSpecName: "config") pod "28f993ea-95bb-4158-a953-ba8d3f1ec097" (UID: "28f993ea-95bb-4158-a953-ba8d3f1ec097"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.378469 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28f993ea-95bb-4158-a953-ba8d3f1ec097" (UID: "28f993ea-95bb-4158-a953-ba8d3f1ec097"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.399303 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28f993ea-95bb-4158-a953-ba8d3f1ec097" (UID: "28f993ea-95bb-4158-a953-ba8d3f1ec097"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.432851 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.432884 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.432895 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.432905 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f993ea-95bb-4158-a953-ba8d3f1ec097-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.432914 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l7t9\" (UniqueName: \"kubernetes.io/projected/28f993ea-95bb-4158-a953-ba8d3f1ec097-kube-api-access-7l7t9\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.688173 4780 generic.go:334] "Generic (PLEG): container finished" podID="5ad6a771-c42f-4893-9d53-488723d532b1" 
containerID="b350240a4378abb9db72f535c5c98f2291baf23455875cdea354e0f4ed27661f" exitCode=0 Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.688236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-w7s5g" event={"ID":"5ad6a771-c42f-4893-9d53-488723d532b1","Type":"ContainerDied","Data":"b350240a4378abb9db72f535c5c98f2291baf23455875cdea354e0f4ed27661f"} Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.689968 4780 generic.go:334] "Generic (PLEG): container finished" podID="d0f3e284-d8e1-4544-99e7-ac76fe479470" containerID="6780cd8bdf4cd22bd3d12173f36b7b9af8652b38cd77695bf228aa1707a08ee0" exitCode=0 Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.690012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7aae-account-create-update-q7lfm" event={"ID":"d0f3e284-d8e1-4544-99e7-ac76fe479470","Type":"ContainerDied","Data":"6780cd8bdf4cd22bd3d12173f36b7b9af8652b38cd77695bf228aa1707a08ee0"} Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.691731 4780 generic.go:334] "Generic (PLEG): container finished" podID="9cef886d-8b12-490f-9860-de378fc3d6fb" containerID="f49589050a71a1e8ab881d86d68db619d9b1db7fd6611fa5d97f0caa42d5b8d7" exitCode=0 Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.691848 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbfmd" event={"ID":"9cef886d-8b12-490f-9860-de378fc3d6fb","Type":"ContainerDied","Data":"f49589050a71a1e8ab881d86d68db619d9b1db7fd6611fa5d97f0caa42d5b8d7"} Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.696091 4780 generic.go:334] "Generic (PLEG): container finished" podID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerID="ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d" exitCode=0 Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.696130 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" 
event={"ID":"28f993ea-95bb-4158-a953-ba8d3f1ec097","Type":"ContainerDied","Data":"ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d"} Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.696163 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" event={"ID":"28f993ea-95bb-4158-a953-ba8d3f1ec097","Type":"ContainerDied","Data":"5410e6f237d42884f48bda68cfe846324a8dc82ec47b6c4011c3c995f2b06e6c"} Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.696179 4780 scope.go:117] "RemoveContainer" containerID="ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.696369 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-f9jnt" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.741550 4780 scope.go:117] "RemoveContainer" containerID="a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.793260 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-f9jnt"] Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.797789 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-f9jnt"] Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.802281 4780 scope.go:117] "RemoveContainer" containerID="ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d" Feb 19 08:39:10 crc kubenswrapper[4780]: E0219 08:39:10.803760 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d\": container with ID starting with ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d not found: ID does not exist" containerID="ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d" Feb 19 08:39:10 
crc kubenswrapper[4780]: I0219 08:39:10.803803 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d"} err="failed to get container status \"ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d\": rpc error: code = NotFound desc = could not find container \"ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d\": container with ID starting with ca79a701479d8ac21aa5ab96581ca9523b1f5728fcba9417a13ccb47edd51c8d not found: ID does not exist" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.803832 4780 scope.go:117] "RemoveContainer" containerID="a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811" Feb 19 08:39:10 crc kubenswrapper[4780]: E0219 08:39:10.804242 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811\": container with ID starting with a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811 not found: ID does not exist" containerID="a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811" Feb 19 08:39:10 crc kubenswrapper[4780]: I0219 08:39:10.804301 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811"} err="failed to get container status \"a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811\": rpc error: code = NotFound desc = could not find container \"a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811\": container with ID starting with a1b62e62377b1ca0f4c36a970ca38b5ec9669624b889b7129c869a57ebfa2811 not found: ID does not exist" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.226196 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.231612 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.238721 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4hz99" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.245140 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhrn\" (UniqueName: \"kubernetes.io/projected/c6e5a977-7deb-4a69-b388-8050af25ae68-kube-api-access-vxhrn\") pod \"c6e5a977-7deb-4a69-b388-8050af25ae68\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.245255 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e5a977-7deb-4a69-b388-8050af25ae68-operator-scripts\") pod \"c6e5a977-7deb-4a69-b388-8050af25ae68\" (UID: \"c6e5a977-7deb-4a69-b388-8050af25ae68\") " Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.246360 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e5a977-7deb-4a69-b388-8050af25ae68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6e5a977-7deb-4a69-b388-8050af25ae68" (UID: "c6e5a977-7deb-4a69-b388-8050af25ae68"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.286394 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e5a977-7deb-4a69-b388-8050af25ae68-kube-api-access-vxhrn" (OuterVolumeSpecName: "kube-api-access-vxhrn") pod "c6e5a977-7deb-4a69-b388-8050af25ae68" (UID: "c6e5a977-7deb-4a69-b388-8050af25ae68"). InnerVolumeSpecName "kube-api-access-vxhrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.346321 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5fe54c-d700-4f46-9091-f9f3d4bca327-operator-scripts\") pod \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.346408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhd2n\" (UniqueName: \"kubernetes.io/projected/4139c0c2-1d42-4e2d-89ac-240b1719eb16-kube-api-access-jhd2n\") pod \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.346451 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqlvc\" (UniqueName: \"kubernetes.io/projected/4a5fe54c-d700-4f46-9091-f9f3d4bca327-kube-api-access-wqlvc\") pod \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\" (UID: \"4a5fe54c-d700-4f46-9091-f9f3d4bca327\") " Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.346491 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4139c0c2-1d42-4e2d-89ac-240b1719eb16-operator-scripts\") pod \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\" (UID: \"4139c0c2-1d42-4e2d-89ac-240b1719eb16\") " Feb 19 08:39:11 crc 
kubenswrapper[4780]: I0219 08:39:11.346821 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5fe54c-d700-4f46-9091-f9f3d4bca327-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a5fe54c-d700-4f46-9091-f9f3d4bca327" (UID: "4a5fe54c-d700-4f46-9091-f9f3d4bca327"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.346878 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxhrn\" (UniqueName: \"kubernetes.io/projected/c6e5a977-7deb-4a69-b388-8050af25ae68-kube-api-access-vxhrn\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.346896 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e5a977-7deb-4a69-b388-8050af25ae68-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.347262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4139c0c2-1d42-4e2d-89ac-240b1719eb16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4139c0c2-1d42-4e2d-89ac-240b1719eb16" (UID: "4139c0c2-1d42-4e2d-89ac-240b1719eb16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.350191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5fe54c-d700-4f46-9091-f9f3d4bca327-kube-api-access-wqlvc" (OuterVolumeSpecName: "kube-api-access-wqlvc") pod "4a5fe54c-d700-4f46-9091-f9f3d4bca327" (UID: "4a5fe54c-d700-4f46-9091-f9f3d4bca327"). InnerVolumeSpecName "kube-api-access-wqlvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.350617 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4139c0c2-1d42-4e2d-89ac-240b1719eb16-kube-api-access-jhd2n" (OuterVolumeSpecName: "kube-api-access-jhd2n") pod "4139c0c2-1d42-4e2d-89ac-240b1719eb16" (UID: "4139c0c2-1d42-4e2d-89ac-240b1719eb16"). InnerVolumeSpecName "kube-api-access-jhd2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.448085 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5fe54c-d700-4f46-9091-f9f3d4bca327-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.448110 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhd2n\" (UniqueName: \"kubernetes.io/projected/4139c0c2-1d42-4e2d-89ac-240b1719eb16-kube-api-access-jhd2n\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.448130 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqlvc\" (UniqueName: \"kubernetes.io/projected/4a5fe54c-d700-4f46-9091-f9f3d4bca327-kube-api-access-wqlvc\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.448141 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4139c0c2-1d42-4e2d-89ac-240b1719eb16-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.706567 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3c22-account-create-update-p5jfv" event={"ID":"c6e5a977-7deb-4a69-b388-8050af25ae68","Type":"ContainerDied","Data":"4a6a086457fdcb6c783ce33045e4cfd20bc82300a7029173a0a59cc67ebaf922"} Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 
08:39:11.706633 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a6a086457fdcb6c783ce33045e4cfd20bc82300a7029173a0a59cc67ebaf922" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.706605 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c22-account-create-update-p5jfv" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.708417 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4hz99" event={"ID":"4139c0c2-1d42-4e2d-89ac-240b1719eb16","Type":"ContainerDied","Data":"eabc5c658c97a5464daf4be5b352c431da6a1121bcf3b3adf484229dcf20863c"} Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.708486 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eabc5c658c97a5464daf4be5b352c431da6a1121bcf3b3adf484229dcf20863c" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.708440 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4hz99" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.710636 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef86-account-create-update-6fgrs" event={"ID":"4a5fe54c-d700-4f46-9091-f9f3d4bca327","Type":"ContainerDied","Data":"361fdee80f495d3d319124a0353f387d6fb5de1f3a2323699d8bf448b485eddf"} Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.710695 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="361fdee80f495d3d319124a0353f387d6fb5de1f3a2323699d8bf448b485eddf" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.710651 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ef86-account-create-update-6fgrs" Feb 19 08:39:11 crc kubenswrapper[4780]: I0219 08:39:11.952866 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" path="/var/lib/kubelet/pods/28f993ea-95bb-4158-a953-ba8d3f1ec097/volumes" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.136708 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.161197 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2fx\" (UniqueName: \"kubernetes.io/projected/d0f3e284-d8e1-4544-99e7-ac76fe479470-kube-api-access-st2fx\") pod \"d0f3e284-d8e1-4544-99e7-ac76fe479470\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.161361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f3e284-d8e1-4544-99e7-ac76fe479470-operator-scripts\") pod \"d0f3e284-d8e1-4544-99e7-ac76fe479470\" (UID: \"d0f3e284-d8e1-4544-99e7-ac76fe479470\") " Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.162366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f3e284-d8e1-4544-99e7-ac76fe479470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0f3e284-d8e1-4544-99e7-ac76fe479470" (UID: "d0f3e284-d8e1-4544-99e7-ac76fe479470"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.181689 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f3e284-d8e1-4544-99e7-ac76fe479470-kube-api-access-st2fx" (OuterVolumeSpecName: "kube-api-access-st2fx") pod "d0f3e284-d8e1-4544-99e7-ac76fe479470" (UID: "d0f3e284-d8e1-4544-99e7-ac76fe479470"). InnerVolumeSpecName "kube-api-access-st2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.263410 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f3e284-d8e1-4544-99e7-ac76fe479470-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.263448 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2fx\" (UniqueName: \"kubernetes.io/projected/d0f3e284-d8e1-4544-99e7-ac76fe479470-kube-api-access-st2fx\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.289691 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.294224 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.364342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad6a771-c42f-4893-9d53-488723d532b1-operator-scripts\") pod \"5ad6a771-c42f-4893-9d53-488723d532b1\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.364483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2l6w\" (UniqueName: \"kubernetes.io/projected/5ad6a771-c42f-4893-9d53-488723d532b1-kube-api-access-m2l6w\") pod \"5ad6a771-c42f-4893-9d53-488723d532b1\" (UID: \"5ad6a771-c42f-4893-9d53-488723d532b1\") " Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.364511 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvrvp\" (UniqueName: \"kubernetes.io/projected/9cef886d-8b12-490f-9860-de378fc3d6fb-kube-api-access-dvrvp\") pod \"9cef886d-8b12-490f-9860-de378fc3d6fb\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.364549 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cef886d-8b12-490f-9860-de378fc3d6fb-operator-scripts\") pod \"9cef886d-8b12-490f-9860-de378fc3d6fb\" (UID: \"9cef886d-8b12-490f-9860-de378fc3d6fb\") " Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.365194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cef886d-8b12-490f-9860-de378fc3d6fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cef886d-8b12-490f-9860-de378fc3d6fb" (UID: "9cef886d-8b12-490f-9860-de378fc3d6fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.365528 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad6a771-c42f-4893-9d53-488723d532b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ad6a771-c42f-4893-9d53-488723d532b1" (UID: "5ad6a771-c42f-4893-9d53-488723d532b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.368085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad6a771-c42f-4893-9d53-488723d532b1-kube-api-access-m2l6w" (OuterVolumeSpecName: "kube-api-access-m2l6w") pod "5ad6a771-c42f-4893-9d53-488723d532b1" (UID: "5ad6a771-c42f-4893-9d53-488723d532b1"). InnerVolumeSpecName "kube-api-access-m2l6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.368470 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cef886d-8b12-490f-9860-de378fc3d6fb-kube-api-access-dvrvp" (OuterVolumeSpecName: "kube-api-access-dvrvp") pod "9cef886d-8b12-490f-9860-de378fc3d6fb" (UID: "9cef886d-8b12-490f-9860-de378fc3d6fb"). InnerVolumeSpecName "kube-api-access-dvrvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.467583 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2l6w\" (UniqueName: \"kubernetes.io/projected/5ad6a771-c42f-4893-9d53-488723d532b1-kube-api-access-m2l6w\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.467621 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvrvp\" (UniqueName: \"kubernetes.io/projected/9cef886d-8b12-490f-9860-de378fc3d6fb-kube-api-access-dvrvp\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.467635 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cef886d-8b12-490f-9860-de378fc3d6fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.467650 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad6a771-c42f-4893-9d53-488723d532b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.722333 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7aae-account-create-update-q7lfm" event={"ID":"d0f3e284-d8e1-4544-99e7-ac76fe479470","Type":"ContainerDied","Data":"658e487df17229bb5319b7672a401fb6afabb475b186a6d1ae3415b17c3224b4"} Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.722793 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="658e487df17229bb5319b7672a401fb6afabb475b186a6d1ae3415b17c3224b4" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.722354 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7aae-account-create-update-q7lfm" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.724584 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbfmd" event={"ID":"9cef886d-8b12-490f-9860-de378fc3d6fb","Type":"ContainerDied","Data":"80cc2a4865d270fafd4178fbaa2da3994d35a578150bb0c0df0eea9e6a63b693"} Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.724631 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80cc2a4865d270fafd4178fbaa2da3994d35a578150bb0c0df0eea9e6a63b693" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.724691 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbfmd" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.727855 4780 generic.go:334] "Generic (PLEG): container finished" podID="690f441d-627e-4fc9-aee6-069a9d11946f" containerID="71c4933d930bf50c88e918b9407c4855c895d0148329d0083c50ac79c8bebef9" exitCode=0 Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.727900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbjfc" event={"ID":"690f441d-627e-4fc9-aee6-069a9d11946f","Type":"ContainerDied","Data":"71c4933d930bf50c88e918b9407c4855c895d0148329d0083c50ac79c8bebef9"} Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.729961 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-w7s5g" event={"ID":"5ad6a771-c42f-4893-9d53-488723d532b1","Type":"ContainerDied","Data":"8b496a8361212691b97c445af091eb9683abcc9d3d58fb4e24dd3466a4765cae"} Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.730176 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b496a8361212691b97c445af091eb9683abcc9d3d58fb4e24dd3466a4765cae" Feb 19 08:39:12 crc kubenswrapper[4780]: I0219 08:39:12.730028 4780 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/keystone-db-create-w7s5g" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.876662 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wl8hp"] Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.884319 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wl8hp"] Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.900989 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l4w4k"] Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901369 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4139c0c2-1d42-4e2d-89ac-240b1719eb16" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901392 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4139c0c2-1d42-4e2d-89ac-240b1719eb16" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901420 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e5a977-7deb-4a69-b388-8050af25ae68" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901429 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e5a977-7deb-4a69-b388-8050af25ae68" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901448 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerName="init" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901456 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerName="init" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901476 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f3e284-d8e1-4544-99e7-ac76fe479470" 
containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901484 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f3e284-d8e1-4544-99e7-ac76fe479470" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901497 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerName="dnsmasq-dns" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901504 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerName="dnsmasq-dns" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901515 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eddfd0e-0c25-4ea0-83a9-01f411602182" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901522 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eddfd0e-0c25-4ea0-83a9-01f411602182" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901536 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5fe54c-d700-4f46-9091-f9f3d4bca327" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901543 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5fe54c-d700-4f46-9091-f9f3d4bca327" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901557 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad6a771-c42f-4893-9d53-488723d532b1" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901564 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad6a771-c42f-4893-9d53-488723d532b1" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: E0219 08:39:13.901578 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9cef886d-8b12-490f-9860-de378fc3d6fb" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cef886d-8b12-490f-9860-de378fc3d6fb" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901770 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5fe54c-d700-4f46-9091-f9f3d4bca327" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901788 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad6a771-c42f-4893-9d53-488723d532b1" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901798 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eddfd0e-0c25-4ea0-83a9-01f411602182" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901812 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f993ea-95bb-4158-a953-ba8d3f1ec097" containerName="dnsmasq-dns" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901822 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cef886d-8b12-490f-9860-de378fc3d6fb" containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901832 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e5a977-7deb-4a69-b388-8050af25ae68" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901842 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f3e284-d8e1-4544-99e7-ac76fe479470" containerName="mariadb-account-create-update" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.901854 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4139c0c2-1d42-4e2d-89ac-240b1719eb16" 
containerName="mariadb-database-create" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.902516 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.905400 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.919886 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l4w4k"] Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.952383 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eddfd0e-0c25-4ea0-83a9-01f411602182" path="/var/lib/kubelet/pods/7eddfd0e-0c25-4ea0-83a9-01f411602182/volumes" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.996010 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8ae866-88c8-4503-98ca-751c5ae1ef56-operator-scripts\") pod \"root-account-create-update-l4w4k\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:13 crc kubenswrapper[4780]: I0219 08:39:13.996090 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tst54\" (UniqueName: \"kubernetes.io/projected/8a8ae866-88c8-4503-98ca-751c5ae1ef56-kube-api-access-tst54\") pod \"root-account-create-update-l4w4k\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.075817 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.096970 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-combined-ca-bundle\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097021 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/690f441d-627e-4fc9-aee6-069a9d11946f-etc-swift\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097078 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-dispersionconf\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097103 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-scripts\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097198 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-swiftconf\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097230 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-ring-data-devices\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgcmr\" (UniqueName: \"kubernetes.io/projected/690f441d-627e-4fc9-aee6-069a9d11946f-kube-api-access-xgcmr\") pod \"690f441d-627e-4fc9-aee6-069a9d11946f\" (UID: \"690f441d-627e-4fc9-aee6-069a9d11946f\") " Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8ae866-88c8-4503-98ca-751c5ae1ef56-operator-scripts\") pod \"root-account-create-update-l4w4k\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.097591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tst54\" (UniqueName: \"kubernetes.io/projected/8a8ae866-88c8-4503-98ca-751c5ae1ef56-kube-api-access-tst54\") pod \"root-account-create-update-l4w4k\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.098015 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690f441d-627e-4fc9-aee6-069a9d11946f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.098785 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.099389 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8ae866-88c8-4503-98ca-751c5ae1ef56-operator-scripts\") pod \"root-account-create-update-l4w4k\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.106295 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690f441d-627e-4fc9-aee6-069a9d11946f-kube-api-access-xgcmr" (OuterVolumeSpecName: "kube-api-access-xgcmr") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "kube-api-access-xgcmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.110537 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.121109 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.121653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.125868 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tst54\" (UniqueName: \"kubernetes.io/projected/8a8ae866-88c8-4503-98ca-751c5ae1ef56-kube-api-access-tst54\") pod \"root-account-create-update-l4w4k\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.128556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-scripts" (OuterVolumeSpecName: "scripts") pod "690f441d-627e-4fc9-aee6-069a9d11946f" (UID: "690f441d-627e-4fc9-aee6-069a9d11946f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.199817 4780 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.200017 4780 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.200077 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgcmr\" (UniqueName: \"kubernetes.io/projected/690f441d-627e-4fc9-aee6-069a9d11946f-kube-api-access-xgcmr\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.200152 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.200724 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/690f441d-627e-4fc9-aee6-069a9d11946f-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.200850 4780 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/690f441d-627e-4fc9-aee6-069a9d11946f-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.200908 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690f441d-627e-4fc9-aee6-069a9d11946f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.217057 4780 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.710402 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l4w4k"] Feb 19 08:39:14 crc kubenswrapper[4780]: W0219 08:39:14.711268 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a8ae866_88c8_4503_98ca_751c5ae1ef56.slice/crio-aad695f417405c5fa7f3582a7748a62a4f20d553a851d869bd63c61d9fe0c701 WatchSource:0}: Error finding container aad695f417405c5fa7f3582a7748a62a4f20d553a851d869bd63c61d9fe0c701: Status 404 returned error can't find the container with id aad695f417405c5fa7f3582a7748a62a4f20d553a851d869bd63c61d9fe0c701 Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.748211 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4w4k" event={"ID":"8a8ae866-88c8-4503-98ca-751c5ae1ef56","Type":"ContainerStarted","Data":"aad695f417405c5fa7f3582a7748a62a4f20d553a851d869bd63c61d9fe0c701"} Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.750052 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbjfc" event={"ID":"690f441d-627e-4fc9-aee6-069a9d11946f","Type":"ContainerDied","Data":"cfa0866e3886db58ee20549b2efebc527fa0fbaa97e7721dfd2bb2b9d72135df"} Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.750117 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa0866e3886db58ee20549b2efebc527fa0fbaa97e7721dfd2bb2b9d72135df" Feb 19 08:39:14 crc kubenswrapper[4780]: I0219 08:39:14.750155 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rbjfc" Feb 19 08:39:15 crc kubenswrapper[4780]: I0219 08:39:15.759251 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4w4k" event={"ID":"8a8ae866-88c8-4503-98ca-751c5ae1ef56","Type":"ContainerStarted","Data":"13d9e63a67f4cabf81e3b618c9388b5c9c94f117d10fd2292810d3542a2eb1dd"} Feb 19 08:39:15 crc kubenswrapper[4780]: I0219 08:39:15.780366 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-l4w4k" podStartSLOduration=2.780348009 podStartE2EDuration="2.780348009s" podCreationTimestamp="2026-02-19 08:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:15.771855159 +0000 UTC m=+1098.515512608" watchObservedRunningTime="2026-02-19 08:39:15.780348009 +0000 UTC m=+1098.524005458" Feb 19 08:39:16 crc kubenswrapper[4780]: I0219 08:39:16.438401 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:16 crc kubenswrapper[4780]: I0219 08:39:16.446635 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"swift-storage-0\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " pod="openstack/swift-storage-0" Feb 19 08:39:16 crc kubenswrapper[4780]: I0219 08:39:16.620809 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.189048 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.365463 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rm7g2"] Feb 19 08:39:17 crc kubenswrapper[4780]: E0219 08:39:17.366031 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690f441d-627e-4fc9-aee6-069a9d11946f" containerName="swift-ring-rebalance" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.366093 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="690f441d-627e-4fc9-aee6-069a9d11946f" containerName="swift-ring-rebalance" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.366324 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="690f441d-627e-4fc9-aee6-069a9d11946f" containerName="swift-ring-rebalance" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.366837 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.370148 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.370200 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xmvzm" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.411699 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rm7g2"] Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.452393 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcrd\" (UniqueName: \"kubernetes.io/projected/8049af4a-37d2-4155-924d-74ddba48cde8-kube-api-access-7kcrd\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.452678 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-config-data\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.452887 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-db-sync-config-data\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.452932 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-combined-ca-bundle\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.554290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-config-data\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.554392 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-db-sync-config-data\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.554418 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-combined-ca-bundle\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.554450 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kcrd\" (UniqueName: \"kubernetes.io/projected/8049af4a-37d2-4155-924d-74ddba48cde8-kube-api-access-7kcrd\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.560477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-config-data\") pod \"glance-db-sync-rm7g2\" (UID: 
\"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.560498 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-combined-ca-bundle\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.560949 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-db-sync-config-data\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.570992 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kcrd\" (UniqueName: \"kubernetes.io/projected/8049af4a-37d2-4155-924d-74ddba48cde8-kube-api-access-7kcrd\") pod \"glance-db-sync-rm7g2\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.692819 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.800761 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"e693c06a94c81a6b75dd845d4b0b434065c56c5699f43e57240307a4ebbfc295"} Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.802415 4780 generic.go:334] "Generic (PLEG): container finished" podID="8a8ae866-88c8-4503-98ca-751c5ae1ef56" containerID="13d9e63a67f4cabf81e3b618c9388b5c9c94f117d10fd2292810d3542a2eb1dd" exitCode=0 Feb 19 08:39:17 crc kubenswrapper[4780]: I0219 08:39:17.802439 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4w4k" event={"ID":"8a8ae866-88c8-4503-98ca-751c5ae1ef56","Type":"ContainerDied","Data":"13d9e63a67f4cabf81e3b618c9388b5c9c94f117d10fd2292810d3542a2eb1dd"} Feb 19 08:39:18 crc kubenswrapper[4780]: I0219 08:39:18.322550 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rm7g2"] Feb 19 08:39:18 crc kubenswrapper[4780]: I0219 08:39:18.814396 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"4f00dfcd9db180a87181a3c3d01eba45c08d63f7da35c02e1f8eb76e78aba164"} Feb 19 08:39:18 crc kubenswrapper[4780]: I0219 08:39:18.814691 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"cc54bc275542f23253910477331cac8c186c8fe35ad45eef8b4392d021ab1bd6"} Feb 19 08:39:18 crc kubenswrapper[4780]: I0219 08:39:18.815650 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rm7g2" 
event={"ID":"8049af4a-37d2-4155-924d-74ddba48cde8","Type":"ContainerStarted","Data":"f2b7f9e4563ed62692923437af75bf701ec2f8458e270e5c04023c292ab2997f"} Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.094439 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.184213 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8ae866-88c8-4503-98ca-751c5ae1ef56-operator-scripts\") pod \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.184477 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tst54\" (UniqueName: \"kubernetes.io/projected/8a8ae866-88c8-4503-98ca-751c5ae1ef56-kube-api-access-tst54\") pod \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\" (UID: \"8a8ae866-88c8-4503-98ca-751c5ae1ef56\") " Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.186099 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8ae866-88c8-4503-98ca-751c5ae1ef56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a8ae866-88c8-4503-98ca-751c5ae1ef56" (UID: "8a8ae866-88c8-4503-98ca-751c5ae1ef56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.189365 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8ae866-88c8-4503-98ca-751c5ae1ef56-kube-api-access-tst54" (OuterVolumeSpecName: "kube-api-access-tst54") pod "8a8ae866-88c8-4503-98ca-751c5ae1ef56" (UID: "8a8ae866-88c8-4503-98ca-751c5ae1ef56"). InnerVolumeSpecName "kube-api-access-tst54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.285564 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8ae866-88c8-4503-98ca-751c5ae1ef56-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.285593 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tst54\" (UniqueName: \"kubernetes.io/projected/8a8ae866-88c8-4503-98ca-751c5ae1ef56-kube-api-access-tst54\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.826205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"fca6d50f8c31e787a22a0309a33a9f858675c6427c2b2a393662ddffb55cb5cd"} Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.826571 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"cfdce07de9a5a0bf8e587fc69e8297c91b8913ec02aceb794adf35e345ef0d13"} Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.828809 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4w4k" event={"ID":"8a8ae866-88c8-4503-98ca-751c5ae1ef56","Type":"ContainerDied","Data":"aad695f417405c5fa7f3582a7748a62a4f20d553a851d869bd63c61d9fe0c701"} Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.828841 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad695f417405c5fa7f3582a7748a62a4f20d553a851d869bd63c61d9fe0c701" Feb 19 08:39:19 crc kubenswrapper[4780]: I0219 08:39:19.828919 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l4w4k" Feb 19 08:39:20 crc kubenswrapper[4780]: I0219 08:39:20.853364 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"a84d96cb9630fe50228a686a5d919a25906bd58fdea9bb24ab6c2a2aa7322132"} Feb 19 08:39:20 crc kubenswrapper[4780]: I0219 08:39:20.853698 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"877371d495b56baac057d9902da78b62d27253e5f8d0b6e010c755c8c3c39f70"} Feb 19 08:39:20 crc kubenswrapper[4780]: I0219 08:39:20.853712 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"b664265cd69a21ec8233ee532fb4e98fecc2e465f25adc32ed09379a81449626"} Feb 19 08:39:21 crc kubenswrapper[4780]: I0219 08:39:21.864372 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"1ebc68436865188cf2fad212eae5f1403c1f801aa9e70f545ee26cbab63c85e3"} Feb 19 08:39:22 crc kubenswrapper[4780]: I0219 08:39:22.184827 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 08:39:22 crc kubenswrapper[4780]: I0219 08:39:22.902118 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"931137663cf98f50b611ab7a350ff29c75096b512f3bd8479a3da859ff9249dd"} Feb 19 08:39:22 crc kubenswrapper[4780]: I0219 08:39:22.902447 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"e346987639804be13b9078ad7625d170b9ddd9507142084a4d071af736f1e9e5"} Feb 19 08:39:22 crc kubenswrapper[4780]: I0219 08:39:22.902457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"cd76a02c1c0dd51248f9e5d74c516ac9964b548ece3e6feef086a11ee77f79b3"} Feb 19 08:39:23 crc kubenswrapper[4780]: I0219 08:39:23.914656 4780 generic.go:334] "Generic (PLEG): container finished" podID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerID="92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11" exitCode=0 Feb 19 08:39:23 crc kubenswrapper[4780]: I0219 08:39:23.914742 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc00934-94b1-4be3-8bf4-845ad08a453f","Type":"ContainerDied","Data":"92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11"} Feb 19 08:39:23 crc kubenswrapper[4780]: I0219 08:39:23.917947 4780 generic.go:334] "Generic (PLEG): container finished" podID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerID="72b514fc5a5844ba34d80cc5567e9e8a5b704063a3ee0c1d2f21802a766c98c3" exitCode=0 Feb 19 08:39:23 crc kubenswrapper[4780]: I0219 08:39:23.917979 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d","Type":"ContainerDied","Data":"72b514fc5a5844ba34d80cc5567e9e8a5b704063a3ee0c1d2f21802a766c98c3"} Feb 19 08:39:27 crc kubenswrapper[4780]: I0219 08:39:27.996943 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nj9cs" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" probeResult="failure" output=< Feb 19 08:39:27 crc kubenswrapper[4780]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 08:39:27 crc 
kubenswrapper[4780]: > Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.017207 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.018546 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.252448 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nj9cs-config-2z5h6"] Feb 19 08:39:28 crc kubenswrapper[4780]: E0219 08:39:28.252830 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8ae866-88c8-4503-98ca-751c5ae1ef56" containerName="mariadb-account-create-update" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.252851 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8ae866-88c8-4503-98ca-751c5ae1ef56" containerName="mariadb-account-create-update" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.253020 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8ae866-88c8-4503-98ca-751c5ae1ef56" containerName="mariadb-account-create-update" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.253582 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.255110 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.258721 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nj9cs-config-2z5h6"] Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.392703 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-log-ovn\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.393011 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbjh\" (UniqueName: \"kubernetes.io/projected/c8d5a12d-1914-4247-9979-75124c4aef2a-kube-api-access-ldbjh\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.393043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-additional-scripts\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.393100 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run-ovn\") pod 
\"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.393295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.393437 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-scripts\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.494888 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-scripts\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.495369 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-log-ovn\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.495551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbjh\" (UniqueName: \"kubernetes.io/projected/c8d5a12d-1914-4247-9979-75124c4aef2a-kube-api-access-ldbjh\") pod 
\"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.495679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-log-ovn\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.495765 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-additional-scripts\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.496095 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run-ovn\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.496325 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run-ovn\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.496558 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: 
\"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.496567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.497341 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-additional-scripts\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.498099 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-scripts\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.515380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbjh\" (UniqueName: \"kubernetes.io/projected/c8d5a12d-1914-4247-9979-75124c4aef2a-kube-api-access-ldbjh\") pod \"ovn-controller-nj9cs-config-2z5h6\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:28 crc kubenswrapper[4780]: I0219 08:39:28.573935 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:29 crc kubenswrapper[4780]: I0219 08:39:29.971419 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc00934-94b1-4be3-8bf4-845ad08a453f","Type":"ContainerStarted","Data":"f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156"} Feb 19 08:39:29 crc kubenswrapper[4780]: I0219 08:39:29.972795 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:39:29 crc kubenswrapper[4780]: I0219 08:39:29.986686 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"7308fc7b05b12c3aded56d1b465656996edb4a1aaac742755f2267f8856a1738"} Feb 19 08:39:29 crc kubenswrapper[4780]: I0219 08:39:29.988896 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d","Type":"ContainerStarted","Data":"7c909b0dbce18b4a1334fd4ddf863413080b8c52e4f0a329f074299164d924ec"} Feb 19 08:39:29 crc kubenswrapper[4780]: I0219 08:39:29.989244 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 08:39:30 crc kubenswrapper[4780]: I0219 08:39:30.008783 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.11522853 podStartE2EDuration="1m8.008765033s" podCreationTimestamp="2026-02-19 08:38:22 +0000 UTC" firstStartedPulling="2026-02-19 08:38:24.835749072 +0000 UTC m=+1047.579406521" lastFinishedPulling="2026-02-19 08:38:47.729285565 +0000 UTC m=+1070.472943024" observedRunningTime="2026-02-19 08:39:30.005174925 +0000 UTC m=+1112.748832374" watchObservedRunningTime="2026-02-19 08:39:30.008765033 +0000 UTC m=+1112.752422483" Feb 19 08:39:30 crc kubenswrapper[4780]: I0219 
08:39:30.037146 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.799628951 podStartE2EDuration="1m8.037118834s" podCreationTimestamp="2026-02-19 08:38:22 +0000 UTC" firstStartedPulling="2026-02-19 08:38:24.42533812 +0000 UTC m=+1047.168995569" lastFinishedPulling="2026-02-19 08:38:47.662828003 +0000 UTC m=+1070.406485452" observedRunningTime="2026-02-19 08:39:30.028277046 +0000 UTC m=+1112.771934495" watchObservedRunningTime="2026-02-19 08:39:30.037118834 +0000 UTC m=+1112.780776283" Feb 19 08:39:30 crc kubenswrapper[4780]: I0219 08:39:30.109883 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nj9cs-config-2z5h6"] Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.005172 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"2188a10120d8bd63dfe375b654626303aec9847b21eac95df015ea5bd7642279"} Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.005542 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"356b8ecfe832de6a4352add852082b58adb0d47998d76bb8be15bf9b809ca6ba"} Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.005557 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerStarted","Data":"c564e5d19b14dd5427df8bd9f7c31fa51688096980104c53a0592b494393444e"} Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.007099 4780 generic.go:334] "Generic (PLEG): container finished" podID="c8d5a12d-1914-4247-9979-75124c4aef2a" containerID="6ca9e8ee84cdf435111e299c530163a4bd65ba3935f02d328d063f1fd872d17a" exitCode=0 Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.007188 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-nj9cs-config-2z5h6" event={"ID":"c8d5a12d-1914-4247-9979-75124c4aef2a","Type":"ContainerDied","Data":"6ca9e8ee84cdf435111e299c530163a4bd65ba3935f02d328d063f1fd872d17a"} Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.007256 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nj9cs-config-2z5h6" event={"ID":"c8d5a12d-1914-4247-9979-75124c4aef2a","Type":"ContainerStarted","Data":"056526ae1975c3b67a4e4aa0a0e89c21d945ec7866901ba2ee22256bd74f9e0b"} Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.009624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rm7g2" event={"ID":"8049af4a-37d2-4155-924d-74ddba48cde8","Type":"ContainerStarted","Data":"1e831d18d35ae8f44d6b8a238c4cab50690385fc043885c5021b32e17ebeb213"} Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.069311 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=27.082350086 podStartE2EDuration="32.069286701s" podCreationTimestamp="2026-02-19 08:38:59 +0000 UTC" firstStartedPulling="2026-02-19 08:39:17.193675974 +0000 UTC m=+1099.937333433" lastFinishedPulling="2026-02-19 08:39:22.180612599 +0000 UTC m=+1104.924270048" observedRunningTime="2026-02-19 08:39:31.060445222 +0000 UTC m=+1113.804102701" watchObservedRunningTime="2026-02-19 08:39:31.069286701 +0000 UTC m=+1113.812944170" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.098463 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rm7g2" podStartSLOduration=2.792320761 podStartE2EDuration="14.098444321s" podCreationTimestamp="2026-02-19 08:39:17 +0000 UTC" firstStartedPulling="2026-02-19 08:39:18.393790681 +0000 UTC m=+1101.137448130" lastFinishedPulling="2026-02-19 08:39:29.699914241 +0000 UTC m=+1112.443571690" observedRunningTime="2026-02-19 08:39:31.091281944 +0000 UTC m=+1113.834939423" 
watchObservedRunningTime="2026-02-19 08:39:31.098444321 +0000 UTC m=+1113.842101790" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.360272 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-768666cd57-5pqkz"] Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.361836 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.364841 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.375731 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768666cd57-5pqkz"] Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.456986 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-svc\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.457032 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-nb\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.457105 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-sb\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc 
kubenswrapper[4780]: I0219 08:39:31.457240 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-swift-storage-0\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.457259 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-config\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.457287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq24z\" (UniqueName: \"kubernetes.io/projected/ce4b4637-869c-436d-8484-49d337a8d25e-kube-api-access-bq24z\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.558438 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-config\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.558518 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq24z\" (UniqueName: \"kubernetes.io/projected/ce4b4637-869c-436d-8484-49d337a8d25e-kube-api-access-bq24z\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 
08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.558560 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-svc\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.558599 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-nb\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.558668 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-sb\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.558747 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-swift-storage-0\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.559553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-config\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.559642 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-svc\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.559742 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-swift-storage-0\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.559770 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-sb\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.560711 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-nb\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.576724 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq24z\" (UniqueName: \"kubernetes.io/projected/ce4b4637-869c-436d-8484-49d337a8d25e-kube-api-access-bq24z\") pod \"dnsmasq-dns-768666cd57-5pqkz\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:31 crc kubenswrapper[4780]: I0219 08:39:31.682889 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.171617 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768666cd57-5pqkz"] Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.320315 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371188 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-log-ovn\") pod \"c8d5a12d-1914-4247-9979-75124c4aef2a\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371230 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldbjh\" (UniqueName: \"kubernetes.io/projected/c8d5a12d-1914-4247-9979-75124c4aef2a-kube-api-access-ldbjh\") pod \"c8d5a12d-1914-4247-9979-75124c4aef2a\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371248 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-additional-scripts\") pod \"c8d5a12d-1914-4247-9979-75124c4aef2a\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c8d5a12d-1914-4247-9979-75124c4aef2a" (UID: "c8d5a12d-1914-4247-9979-75124c4aef2a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371341 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run" (OuterVolumeSpecName: "var-run") pod "c8d5a12d-1914-4247-9979-75124c4aef2a" (UID: "c8d5a12d-1914-4247-9979-75124c4aef2a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371310 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run\") pod \"c8d5a12d-1914-4247-9979-75124c4aef2a\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371397 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run-ovn\") pod \"c8d5a12d-1914-4247-9979-75124c4aef2a\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371415 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-scripts\") pod \"c8d5a12d-1914-4247-9979-75124c4aef2a\" (UID: \"c8d5a12d-1914-4247-9979-75124c4aef2a\") " Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371666 4780 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371676 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.371909 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c8d5a12d-1914-4247-9979-75124c4aef2a" (UID: "c8d5a12d-1914-4247-9979-75124c4aef2a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.372785 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c8d5a12d-1914-4247-9979-75124c4aef2a" (UID: "c8d5a12d-1914-4247-9979-75124c4aef2a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.372911 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-scripts" (OuterVolumeSpecName: "scripts") pod "c8d5a12d-1914-4247-9979-75124c4aef2a" (UID: "c8d5a12d-1914-4247-9979-75124c4aef2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.376227 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d5a12d-1914-4247-9979-75124c4aef2a-kube-api-access-ldbjh" (OuterVolumeSpecName: "kube-api-access-ldbjh") pod "c8d5a12d-1914-4247-9979-75124c4aef2a" (UID: "c8d5a12d-1914-4247-9979-75124c4aef2a"). InnerVolumeSpecName "kube-api-access-ldbjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.473500 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.473550 4780 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5a12d-1914-4247-9979-75124c4aef2a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.473567 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldbjh\" (UniqueName: \"kubernetes.io/projected/c8d5a12d-1914-4247-9979-75124c4aef2a-kube-api-access-ldbjh\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.473583 4780 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8d5a12d-1914-4247-9979-75124c4aef2a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:32 crc kubenswrapper[4780]: I0219 08:39:32.982466 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nj9cs" Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.026143 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nj9cs-config-2z5h6" Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.026188 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nj9cs-config-2z5h6" event={"ID":"c8d5a12d-1914-4247-9979-75124c4aef2a","Type":"ContainerDied","Data":"056526ae1975c3b67a4e4aa0a0e89c21d945ec7866901ba2ee22256bd74f9e0b"} Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.026527 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056526ae1975c3b67a4e4aa0a0e89c21d945ec7866901ba2ee22256bd74f9e0b" Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.027396 4780 generic.go:334] "Generic (PLEG): container finished" podID="ce4b4637-869c-436d-8484-49d337a8d25e" containerID="ed2974898c14fb08f775f7280877280d9db04d7df939142730620efb397d9e52" exitCode=0 Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.027434 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" event={"ID":"ce4b4637-869c-436d-8484-49d337a8d25e","Type":"ContainerDied","Data":"ed2974898c14fb08f775f7280877280d9db04d7df939142730620efb397d9e52"} Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.027460 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" event={"ID":"ce4b4637-869c-436d-8484-49d337a8d25e","Type":"ContainerStarted","Data":"27efe8924167d90d08144557ea44413433d7241ca72d798de508991f9d09717b"} Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.433813 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nj9cs-config-2z5h6"] Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.440436 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nj9cs-config-2z5h6"] Feb 19 08:39:33 crc kubenswrapper[4780]: I0219 08:39:33.949890 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c8d5a12d-1914-4247-9979-75124c4aef2a" path="/var/lib/kubelet/pods/c8d5a12d-1914-4247-9979-75124c4aef2a/volumes" Feb 19 08:39:34 crc kubenswrapper[4780]: I0219 08:39:34.036358 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" event={"ID":"ce4b4637-869c-436d-8484-49d337a8d25e","Type":"ContainerStarted","Data":"5724bf2e363c5bea6d4213bb4848e4bc8e9c90d59a0d7df8d0d10db9c775ab43"} Feb 19 08:39:34 crc kubenswrapper[4780]: I0219 08:39:34.036506 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:34 crc kubenswrapper[4780]: I0219 08:39:34.081427 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" podStartSLOduration=3.081405756 podStartE2EDuration="3.081405756s" podCreationTimestamp="2026-02-19 08:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:34.07710229 +0000 UTC m=+1116.820759749" watchObservedRunningTime="2026-02-19 08:39:34.081405756 +0000 UTC m=+1116.825063205" Feb 19 08:39:36 crc kubenswrapper[4780]: I0219 08:39:36.336467 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:39:36 crc kubenswrapper[4780]: I0219 08:39:36.336746 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:39:37 crc kubenswrapper[4780]: I0219 08:39:37.069277 4780 
generic.go:334] "Generic (PLEG): container finished" podID="8049af4a-37d2-4155-924d-74ddba48cde8" containerID="1e831d18d35ae8f44d6b8a238c4cab50690385fc043885c5021b32e17ebeb213" exitCode=0 Feb 19 08:39:37 crc kubenswrapper[4780]: I0219 08:39:37.069356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rm7g2" event={"ID":"8049af4a-37d2-4155-924d-74ddba48cde8","Type":"ContainerDied","Data":"1e831d18d35ae8f44d6b8a238c4cab50690385fc043885c5021b32e17ebeb213"} Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.540532 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.581728 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kcrd\" (UniqueName: \"kubernetes.io/projected/8049af4a-37d2-4155-924d-74ddba48cde8-kube-api-access-7kcrd\") pod \"8049af4a-37d2-4155-924d-74ddba48cde8\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.581796 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-config-data\") pod \"8049af4a-37d2-4155-924d-74ddba48cde8\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.581887 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-combined-ca-bundle\") pod \"8049af4a-37d2-4155-924d-74ddba48cde8\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.581990 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-db-sync-config-data\") pod \"8049af4a-37d2-4155-924d-74ddba48cde8\" (UID: \"8049af4a-37d2-4155-924d-74ddba48cde8\") " Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.591717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8049af4a-37d2-4155-924d-74ddba48cde8" (UID: "8049af4a-37d2-4155-924d-74ddba48cde8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.591717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8049af4a-37d2-4155-924d-74ddba48cde8-kube-api-access-7kcrd" (OuterVolumeSpecName: "kube-api-access-7kcrd") pod "8049af4a-37d2-4155-924d-74ddba48cde8" (UID: "8049af4a-37d2-4155-924d-74ddba48cde8"). InnerVolumeSpecName "kube-api-access-7kcrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.608768 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8049af4a-37d2-4155-924d-74ddba48cde8" (UID: "8049af4a-37d2-4155-924d-74ddba48cde8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.629264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-config-data" (OuterVolumeSpecName: "config-data") pod "8049af4a-37d2-4155-924d-74ddba48cde8" (UID: "8049af4a-37d2-4155-924d-74ddba48cde8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.683493 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.683524 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kcrd\" (UniqueName: \"kubernetes.io/projected/8049af4a-37d2-4155-924d-74ddba48cde8-kube-api-access-7kcrd\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.683536 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:38 crc kubenswrapper[4780]: I0219 08:39:38.683544 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8049af4a-37d2-4155-924d-74ddba48cde8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.088731 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rm7g2" event={"ID":"8049af4a-37d2-4155-924d-74ddba48cde8","Type":"ContainerDied","Data":"f2b7f9e4563ed62692923437af75bf701ec2f8458e270e5c04023c292ab2997f"} Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.088793 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b7f9e4563ed62692923437af75bf701ec2f8458e270e5c04023c292ab2997f" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.088821 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rm7g2" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.479964 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768666cd57-5pqkz"] Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.480414 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="dnsmasq-dns" containerID="cri-o://5724bf2e363c5bea6d4213bb4848e4bc8e9c90d59a0d7df8d0d10db9c775ab43" gracePeriod=10 Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.481318 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.528940 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-cmlpn"] Feb 19 08:39:39 crc kubenswrapper[4780]: E0219 08:39:39.529274 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8049af4a-37d2-4155-924d-74ddba48cde8" containerName="glance-db-sync" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.529289 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8049af4a-37d2-4155-924d-74ddba48cde8" containerName="glance-db-sync" Feb 19 08:39:39 crc kubenswrapper[4780]: E0219 08:39:39.529305 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d5a12d-1914-4247-9979-75124c4aef2a" containerName="ovn-config" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.529313 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d5a12d-1914-4247-9979-75124c4aef2a" containerName="ovn-config" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.529467 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8049af4a-37d2-4155-924d-74ddba48cde8" containerName="glance-db-sync" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.529482 4780 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d5a12d-1914-4247-9979-75124c4aef2a" containerName="ovn-config" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.530295 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.556489 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-cmlpn"] Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.626544 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-config\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.626602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnmv\" (UniqueName: \"kubernetes.io/projected/baead610-2c43-4e62-bb43-70afcade3d0b-kube-api-access-5wnmv\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.626738 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-svc\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.626794 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-nb\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" 
(UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.626894 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-swift-storage-0\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.627074 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-sb\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.728201 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-swift-storage-0\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.729050 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-swift-storage-0\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.729273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.729836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-sb\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.729899 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-config\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.729943 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnmv\" (UniqueName: \"kubernetes.io/projected/baead610-2c43-4e62-bb43-70afcade3d0b-kube-api-access-5wnmv\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.730349 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-config\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.730598 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-svc\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " 
pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.731340 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-nb\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.731272 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-svc\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.732151 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-nb\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.758721 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnmv\" (UniqueName: \"kubernetes.io/projected/baead610-2c43-4e62-bb43-70afcade3d0b-kube-api-access-5wnmv\") pod \"dnsmasq-dns-68677f88c9-cmlpn\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:39 crc kubenswrapper[4780]: I0219 08:39:39.875480 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:40 crc kubenswrapper[4780]: I0219 08:39:40.103667 4780 generic.go:334] "Generic (PLEG): container finished" podID="ce4b4637-869c-436d-8484-49d337a8d25e" containerID="5724bf2e363c5bea6d4213bb4848e4bc8e9c90d59a0d7df8d0d10db9c775ab43" exitCode=0 Feb 19 08:39:40 crc kubenswrapper[4780]: I0219 08:39:40.103746 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" event={"ID":"ce4b4637-869c-436d-8484-49d337a8d25e","Type":"ContainerDied","Data":"5724bf2e363c5bea6d4213bb4848e4bc8e9c90d59a0d7df8d0d10db9c775ab43"} Feb 19 08:39:40 crc kubenswrapper[4780]: I0219 08:39:40.334617 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-cmlpn"] Feb 19 08:39:40 crc kubenswrapper[4780]: W0219 08:39:40.339290 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaead610_2c43_4e62_bb43_70afcade3d0b.slice/crio-e40f07fd025aeab923f6237c17000080102b3d6b0d51d581020527a594509944 WatchSource:0}: Error finding container e40f07fd025aeab923f6237c17000080102b3d6b0d51d581020527a594509944: Status 404 returned error can't find the container with id e40f07fd025aeab923f6237c17000080102b3d6b0d51d581020527a594509944 Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.119748 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" event={"ID":"baead610-2c43-4e62-bb43-70afcade3d0b","Type":"ContainerStarted","Data":"0d199aa5c38e2d8cc1a3e238e3d85a2e2aff1858097f236718d37a03b20ef52c"} Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.120094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" event={"ID":"baead610-2c43-4e62-bb43-70afcade3d0b","Type":"ContainerStarted","Data":"e40f07fd025aeab923f6237c17000080102b3d6b0d51d581020527a594509944"} Feb 19 08:39:41 crc 
kubenswrapper[4780]: I0219 08:39:41.721264 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.863404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-nb\") pod \"ce4b4637-869c-436d-8484-49d337a8d25e\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.863452 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-swift-storage-0\") pod \"ce4b4637-869c-436d-8484-49d337a8d25e\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.863551 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-svc\") pod \"ce4b4637-869c-436d-8484-49d337a8d25e\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.863593 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq24z\" (UniqueName: \"kubernetes.io/projected/ce4b4637-869c-436d-8484-49d337a8d25e-kube-api-access-bq24z\") pod \"ce4b4637-869c-436d-8484-49d337a8d25e\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.863634 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-config\") pod \"ce4b4637-869c-436d-8484-49d337a8d25e\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.863664 
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-sb\") pod \"ce4b4637-869c-436d-8484-49d337a8d25e\" (UID: \"ce4b4637-869c-436d-8484-49d337a8d25e\") " Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.867992 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4b4637-869c-436d-8484-49d337a8d25e-kube-api-access-bq24z" (OuterVolumeSpecName: "kube-api-access-bq24z") pod "ce4b4637-869c-436d-8484-49d337a8d25e" (UID: "ce4b4637-869c-436d-8484-49d337a8d25e"). InnerVolumeSpecName "kube-api-access-bq24z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.905030 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce4b4637-869c-436d-8484-49d337a8d25e" (UID: "ce4b4637-869c-436d-8484-49d337a8d25e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.906878 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce4b4637-869c-436d-8484-49d337a8d25e" (UID: "ce4b4637-869c-436d-8484-49d337a8d25e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.909723 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-config" (OuterVolumeSpecName: "config") pod "ce4b4637-869c-436d-8484-49d337a8d25e" (UID: "ce4b4637-869c-436d-8484-49d337a8d25e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.910793 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce4b4637-869c-436d-8484-49d337a8d25e" (UID: "ce4b4637-869c-436d-8484-49d337a8d25e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.927079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce4b4637-869c-436d-8484-49d337a8d25e" (UID: "ce4b4637-869c-436d-8484-49d337a8d25e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.969963 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.970000 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.970014 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.970025 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq24z\" (UniqueName: \"kubernetes.io/projected/ce4b4637-869c-436d-8484-49d337a8d25e-kube-api-access-bq24z\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.970038 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:41 crc kubenswrapper[4780]: I0219 08:39:41.970049 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce4b4637-869c-436d-8484-49d337a8d25e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.128105 4780 generic.go:334] "Generic (PLEG): container finished" podID="baead610-2c43-4e62-bb43-70afcade3d0b" containerID="0d199aa5c38e2d8cc1a3e238e3d85a2e2aff1858097f236718d37a03b20ef52c" exitCode=0 Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.128237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" event={"ID":"baead610-2c43-4e62-bb43-70afcade3d0b","Type":"ContainerDied","Data":"0d199aa5c38e2d8cc1a3e238e3d85a2e2aff1858097f236718d37a03b20ef52c"} Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.130565 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" event={"ID":"ce4b4637-869c-436d-8484-49d337a8d25e","Type":"ContainerDied","Data":"27efe8924167d90d08144557ea44413433d7241ca72d798de508991f9d09717b"} Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.130721 4780 scope.go:117] "RemoveContainer" containerID="5724bf2e363c5bea6d4213bb4848e4bc8e9c90d59a0d7df8d0d10db9c775ab43" Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.130954 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.151708 4780 scope.go:117] "RemoveContainer" containerID="ed2974898c14fb08f775f7280877280d9db04d7df939142730620efb397d9e52" Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.184306 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768666cd57-5pqkz"] Feb 19 08:39:42 crc kubenswrapper[4780]: I0219 08:39:42.192302 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-768666cd57-5pqkz"] Feb 19 08:39:43 crc kubenswrapper[4780]: I0219 08:39:43.143293 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" event={"ID":"baead610-2c43-4e62-bb43-70afcade3d0b","Type":"ContainerStarted","Data":"0626ef430b1b230e5dc5f68b1d95f90d7488328b8997e6848ba98c563993a58e"} Feb 19 08:39:43 crc kubenswrapper[4780]: I0219 08:39:43.143557 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:43 crc kubenswrapper[4780]: I0219 08:39:43.161345 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" podStartSLOduration=4.161324387 podStartE2EDuration="4.161324387s" podCreationTimestamp="2026-02-19 08:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:43.160571868 +0000 UTC m=+1125.904229327" watchObservedRunningTime="2026-02-19 08:39:43.161324387 +0000 UTC m=+1125.904981876" Feb 19 08:39:43 crc kubenswrapper[4780]: I0219 08:39:43.951059 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" path="/var/lib/kubelet/pods/ce4b4637-869c-436d-8484-49d337a8d25e/volumes" Feb 19 08:39:43 crc kubenswrapper[4780]: I0219 08:39:43.971352 4780 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.336306 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.356227 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v7869"] Feb 19 08:39:44 crc kubenswrapper[4780]: E0219 08:39:44.356509 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="dnsmasq-dns" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.356524 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="dnsmasq-dns" Feb 19 08:39:44 crc kubenswrapper[4780]: E0219 08:39:44.356549 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="init" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.356556 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="init" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.356696 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="dnsmasq-dns" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.357181 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.369541 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v7869"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.517601 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4825dc1d-ef7f-4ab2-a873-1072afe8e515-operator-scripts\") pod \"cinder-db-create-v7869\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.517702 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnzc8\" (UniqueName: \"kubernetes.io/projected/4825dc1d-ef7f-4ab2-a873-1072afe8e515-kube-api-access-qnzc8\") pod \"cinder-db-create-v7869\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.539313 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-th9tx"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.540300 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.565177 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-eb1b-account-create-update-bvb8p"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.566171 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.568329 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.569816 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-th9tx"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.614950 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eb1b-account-create-update-bvb8p"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.619827 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d254b-03a9-46ed-bd17-84aa2b8a690f-operator-scripts\") pod \"barbican-eb1b-account-create-update-bvb8p\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") " pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.619907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7j54\" (UniqueName: \"kubernetes.io/projected/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-kube-api-access-p7j54\") pod \"barbican-db-create-th9tx\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.619990 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4825dc1d-ef7f-4ab2-a873-1072afe8e515-operator-scripts\") pod \"cinder-db-create-v7869\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.620017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-operator-scripts\") pod \"barbican-db-create-th9tx\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.620039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnzc8\" (UniqueName: \"kubernetes.io/projected/4825dc1d-ef7f-4ab2-a873-1072afe8e515-kube-api-access-qnzc8\") pod \"cinder-db-create-v7869\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.620056 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtgg\" (UniqueName: \"kubernetes.io/projected/550d254b-03a9-46ed-bd17-84aa2b8a690f-kube-api-access-cqtgg\") pod \"barbican-eb1b-account-create-update-bvb8p\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") " pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.620808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4825dc1d-ef7f-4ab2-a873-1072afe8e515-operator-scripts\") pod \"cinder-db-create-v7869\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.664891 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnzc8\" (UniqueName: \"kubernetes.io/projected/4825dc1d-ef7f-4ab2-a873-1072afe8e515-kube-api-access-qnzc8\") pod \"cinder-db-create-v7869\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.674395 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9t6f2"] Feb 19 08:39:44 crc 
kubenswrapper[4780]: I0219 08:39:44.675379 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.679789 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7869" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.689474 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sfs2h"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.690826 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.697910 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.698214 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gt54x" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.698351 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.698512 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.709499 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-058b-account-create-update-hd56g"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.710788 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.712881 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sfs2h"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.716762 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.721972 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d254b-03a9-46ed-bd17-84aa2b8a690f-operator-scripts\") pod \"barbican-eb1b-account-create-update-bvb8p\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") " pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.722032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7j54\" (UniqueName: \"kubernetes.io/projected/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-kube-api-access-p7j54\") pod \"barbican-db-create-th9tx\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.722071 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrmjc\" (UniqueName: \"kubernetes.io/projected/fe8532ec-1e86-48f6-8446-3ff490756edd-kube-api-access-hrmjc\") pod \"neutron-db-create-9t6f2\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.722118 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe8532ec-1e86-48f6-8446-3ff490756edd-operator-scripts\") pod \"neutron-db-create-9t6f2\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " 
pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.722160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-operator-scripts\") pod \"barbican-db-create-th9tx\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.722182 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqtgg\" (UniqueName: \"kubernetes.io/projected/550d254b-03a9-46ed-bd17-84aa2b8a690f-kube-api-access-cqtgg\") pod \"barbican-eb1b-account-create-update-bvb8p\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") " pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.723068 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d254b-03a9-46ed-bd17-84aa2b8a690f-operator-scripts\") pod \"barbican-eb1b-account-create-update-bvb8p\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") " pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.723730 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-operator-scripts\") pod \"barbican-db-create-th9tx\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.732732 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-058b-account-create-update-hd56g"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.745545 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9t6f2"] Feb 19 08:39:44 crc 
kubenswrapper[4780]: I0219 08:39:44.746202 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7j54\" (UniqueName: \"kubernetes.io/projected/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-kube-api-access-p7j54\") pod \"barbican-db-create-th9tx\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.761461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqtgg\" (UniqueName: \"kubernetes.io/projected/550d254b-03a9-46ed-bd17-84aa2b8a690f-kube-api-access-cqtgg\") pod \"barbican-eb1b-account-create-update-bvb8p\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") " pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.824285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-combined-ca-bundle\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.824367 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe8532ec-1e86-48f6-8446-3ff490756edd-operator-scripts\") pod \"neutron-db-create-9t6f2\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.824402 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9k8\" (UniqueName: \"kubernetes.io/projected/b4821312-0274-4930-bd0a-d6438b1e3e56-kube-api-access-zx9k8\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 
crc kubenswrapper[4780]: I0219 08:39:44.824444 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsg4\" (UniqueName: \"kubernetes.io/projected/605584b0-916f-400c-a371-801f3eb3daa4-kube-api-access-hxsg4\") pod \"cinder-058b-account-create-update-hd56g\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.824472 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/605584b0-916f-400c-a371-801f3eb3daa4-operator-scripts\") pod \"cinder-058b-account-create-update-hd56g\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.824506 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-config-data\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.824548 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrmjc\" (UniqueName: \"kubernetes.io/projected/fe8532ec-1e86-48f6-8446-3ff490756edd-kube-api-access-hrmjc\") pod \"neutron-db-create-9t6f2\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.825570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe8532ec-1e86-48f6-8446-3ff490756edd-operator-scripts\") pod \"neutron-db-create-9t6f2\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " 
pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.842274 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f929-account-create-update-fx28c"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.843398 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.846371 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.854635 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrmjc\" (UniqueName: \"kubernetes.io/projected/fe8532ec-1e86-48f6-8446-3ff490756edd-kube-api-access-hrmjc\") pod \"neutron-db-create-9t6f2\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.854732 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.862144 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f929-account-create-update-fx28c"] Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.880818 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.926406 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9k8\" (UniqueName: \"kubernetes.io/projected/b4821312-0274-4930-bd0a-d6438b1e3e56-kube-api-access-zx9k8\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.926934 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1c75bcc-eb7e-442e-ae68-288c9c525e73-operator-scripts\") pod \"neutron-f929-account-create-update-fx28c\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.930237 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsg4\" (UniqueName: \"kubernetes.io/projected/605584b0-916f-400c-a371-801f3eb3daa4-kube-api-access-hxsg4\") pod \"cinder-058b-account-create-update-hd56g\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.930333 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/605584b0-916f-400c-a371-801f3eb3daa4-operator-scripts\") pod \"cinder-058b-account-create-update-hd56g\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.930918 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-config-data\") 
pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.931024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-combined-ca-bundle\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.931080 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57khn\" (UniqueName: \"kubernetes.io/projected/e1c75bcc-eb7e-442e-ae68-288c9c525e73-kube-api-access-57khn\") pod \"neutron-f929-account-create-update-fx28c\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.931552 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/605584b0-916f-400c-a371-801f3eb3daa4-operator-scripts\") pod \"cinder-058b-account-create-update-hd56g\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.937204 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-config-data\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.943371 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-combined-ca-bundle\") pod \"keystone-db-sync-sfs2h\" 
(UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.947746 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsg4\" (UniqueName: \"kubernetes.io/projected/605584b0-916f-400c-a371-801f3eb3daa4-kube-api-access-hxsg4\") pod \"cinder-058b-account-create-update-hd56g\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.951078 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9k8\" (UniqueName: \"kubernetes.io/projected/b4821312-0274-4930-bd0a-d6438b1e3e56-kube-api-access-zx9k8\") pod \"keystone-db-sync-sfs2h\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") " pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:44 crc kubenswrapper[4780]: I0219 08:39:44.965421 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.032656 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1c75bcc-eb7e-442e-ae68-288c9c525e73-operator-scripts\") pod \"neutron-f929-account-create-update-fx28c\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.032902 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57khn\" (UniqueName: \"kubernetes.io/projected/e1c75bcc-eb7e-442e-ae68-288c9c525e73-kube-api-access-57khn\") pod \"neutron-f929-account-create-update-fx28c\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.035086 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1c75bcc-eb7e-442e-ae68-288c9c525e73-operator-scripts\") pod \"neutron-f929-account-create-update-fx28c\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.050451 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.053266 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57khn\" (UniqueName: \"kubernetes.io/projected/e1c75bcc-eb7e-442e-ae68-288c9c525e73-kube-api-access-57khn\") pod \"neutron-f929-account-create-update-fx28c\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.152834 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v7869"] Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.228241 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sfs2h" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.278862 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.467194 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eb1b-account-create-update-bvb8p"] Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.482619 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-th9tx"] Feb 19 08:39:45 crc kubenswrapper[4780]: W0219 08:39:45.491662 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57bc781e_12f1_4fd2_9bcb_fa3021a33e60.slice/crio-c3b4b1a49ae3b685701c6bc8934e01fb1517ba5ff9e670c1b1c51e07dc6387b2 WatchSource:0}: Error finding container c3b4b1a49ae3b685701c6bc8934e01fb1517ba5ff9e670c1b1c51e07dc6387b2: Status 404 returned error can't find the container with id c3b4b1a49ae3b685701c6bc8934e01fb1517ba5ff9e670c1b1c51e07dc6387b2 Feb 19 08:39:45 crc kubenswrapper[4780]: I0219 08:39:45.515961 4780 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-db-create-9t6f2"] Feb 19 08:39:45 crc kubenswrapper[4780]: W0219 08:39:45.529417 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe8532ec_1e86_48f6_8446_3ff490756edd.slice/crio-271f08ad26fe8ddd38f9d96ef2c120ff75a275b7feebae275ada6bfed5f822ac WatchSource:0}: Error finding container 271f08ad26fe8ddd38f9d96ef2c120ff75a275b7feebae275ada6bfed5f822ac: Status 404 returned error can't find the container with id 271f08ad26fe8ddd38f9d96ef2c120ff75a275b7feebae275ada6bfed5f822ac Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:45.606448 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sfs2h"] Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:45.641897 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-058b-account-create-update-hd56g"] Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.182219 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eb1b-account-create-update-bvb8p" event={"ID":"550d254b-03a9-46ed-bd17-84aa2b8a690f","Type":"ContainerStarted","Data":"babcffd09ac04f4a740adc86c8d0876c0829bc381dfdb0ee7a062a05f60aa1f7"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.182558 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eb1b-account-create-update-bvb8p" event={"ID":"550d254b-03a9-46ed-bd17-84aa2b8a690f","Type":"ContainerStarted","Data":"5b0b1657b8b2caf3cbd4a6e3349edb015edeed3a752029075ef98ee60d2f1686"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.185239 4780 generic.go:334] "Generic (PLEG): container finished" podID="fe8532ec-1e86-48f6-8446-3ff490756edd" containerID="553f577ac51e9372da372faf811a0c3edeee048a97a94209d758afdc77103541" exitCode=0 Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.185384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9t6f2" 
event={"ID":"fe8532ec-1e86-48f6-8446-3ff490756edd","Type":"ContainerDied","Data":"553f577ac51e9372da372faf811a0c3edeee048a97a94209d758afdc77103541"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.185429 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9t6f2" event={"ID":"fe8532ec-1e86-48f6-8446-3ff490756edd","Type":"ContainerStarted","Data":"271f08ad26fe8ddd38f9d96ef2c120ff75a275b7feebae275ada6bfed5f822ac"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.192179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sfs2h" event={"ID":"b4821312-0274-4930-bd0a-d6438b1e3e56","Type":"ContainerStarted","Data":"0a572448ff3d85aa1c7c0f402790d5a320514f3027465c8be19d1a3bd38cc969"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.197438 4780 generic.go:334] "Generic (PLEG): container finished" podID="57bc781e-12f1-4fd2-9bcb-fa3021a33e60" containerID="68d8d2a238df0c56bfedd746eb03803f3b0221f54d574b5db9d92b68a77195e0" exitCode=0 Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.197507 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-th9tx" event={"ID":"57bc781e-12f1-4fd2-9bcb-fa3021a33e60","Type":"ContainerDied","Data":"68d8d2a238df0c56bfedd746eb03803f3b0221f54d574b5db9d92b68a77195e0"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.197539 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-th9tx" event={"ID":"57bc781e-12f1-4fd2-9bcb-fa3021a33e60","Type":"ContainerStarted","Data":"c3b4b1a49ae3b685701c6bc8934e01fb1517ba5ff9e670c1b1c51e07dc6387b2"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.199497 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-058b-account-create-update-hd56g" event={"ID":"605584b0-916f-400c-a371-801f3eb3daa4","Type":"ContainerStarted","Data":"6393f12f3cb4c5e2df5ae49ff25ea66f3e1da538d8a43eb9dcee422b85672c7c"} Feb 19 08:39:46 crc 
kubenswrapper[4780]: I0219 08:39:46.199516 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-058b-account-create-update-hd56g" event={"ID":"605584b0-916f-400c-a371-801f3eb3daa4","Type":"ContainerStarted","Data":"0aa43c9226f1c70f4e80c62a79aa46bca4fb438fa62df620a29918728ca79db2"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.204620 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-eb1b-account-create-update-bvb8p" podStartSLOduration=2.204604651 podStartE2EDuration="2.204604651s" podCreationTimestamp="2026-02-19 08:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:46.203963025 +0000 UTC m=+1128.947620484" watchObservedRunningTime="2026-02-19 08:39:46.204604651 +0000 UTC m=+1128.948262100" Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.210424 4780 generic.go:334] "Generic (PLEG): container finished" podID="4825dc1d-ef7f-4ab2-a873-1072afe8e515" containerID="43f3807845d6b9d2b1024f020aa420e73a2feee64ccc631c39ac44b44af591b4" exitCode=0 Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.210467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7869" event={"ID":"4825dc1d-ef7f-4ab2-a873-1072afe8e515","Type":"ContainerDied","Data":"43f3807845d6b9d2b1024f020aa420e73a2feee64ccc631c39ac44b44af591b4"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.210493 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7869" event={"ID":"4825dc1d-ef7f-4ab2-a873-1072afe8e515","Type":"ContainerStarted","Data":"c33975035650e037ad9915c49ee68df54b175bff2bc1e5cc840fb620ad8b9e15"} Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.249014 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-058b-account-create-update-hd56g" podStartSLOduration=2.248994468 
podStartE2EDuration="2.248994468s" podCreationTimestamp="2026-02-19 08:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:39:46.241598835 +0000 UTC m=+1128.985256284" watchObservedRunningTime="2026-02-19 08:39:46.248994468 +0000 UTC m=+1128.992651907" Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.620078 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f929-account-create-update-fx28c"] Feb 19 08:39:46 crc kubenswrapper[4780]: I0219 08:39:46.684484 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-768666cd57-5pqkz" podUID="ce4b4637-869c-436d-8484-49d337a8d25e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 19 08:39:46 crc kubenswrapper[4780]: W0219 08:39:46.693581 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1c75bcc_eb7e_442e_ae68_288c9c525e73.slice/crio-6730f10bd94d7161d8ce372dfcc7377ebcc4136083258cb118949d2ea12e364a WatchSource:0}: Error finding container 6730f10bd94d7161d8ce372dfcc7377ebcc4136083258cb118949d2ea12e364a: Status 404 returned error can't find the container with id 6730f10bd94d7161d8ce372dfcc7377ebcc4136083258cb118949d2ea12e364a Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.217861 4780 generic.go:334] "Generic (PLEG): container finished" podID="605584b0-916f-400c-a371-801f3eb3daa4" containerID="6393f12f3cb4c5e2df5ae49ff25ea66f3e1da538d8a43eb9dcee422b85672c7c" exitCode=0 Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.218215 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-058b-account-create-update-hd56g" event={"ID":"605584b0-916f-400c-a371-801f3eb3daa4","Type":"ContainerDied","Data":"6393f12f3cb4c5e2df5ae49ff25ea66f3e1da538d8a43eb9dcee422b85672c7c"} Feb 19 08:39:47 crc 
kubenswrapper[4780]: I0219 08:39:47.219424 4780 generic.go:334] "Generic (PLEG): container finished" podID="550d254b-03a9-46ed-bd17-84aa2b8a690f" containerID="babcffd09ac04f4a740adc86c8d0876c0829bc381dfdb0ee7a062a05f60aa1f7" exitCode=0 Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.219503 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eb1b-account-create-update-bvb8p" event={"ID":"550d254b-03a9-46ed-bd17-84aa2b8a690f","Type":"ContainerDied","Data":"babcffd09ac04f4a740adc86c8d0876c0829bc381dfdb0ee7a062a05f60aa1f7"} Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.221854 4780 generic.go:334] "Generic (PLEG): container finished" podID="e1c75bcc-eb7e-442e-ae68-288c9c525e73" containerID="eff7393185561c27413a75db83884015c05cba991a750ef914d6173d4cfdb168" exitCode=0 Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.221958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f929-account-create-update-fx28c" event={"ID":"e1c75bcc-eb7e-442e-ae68-288c9c525e73","Type":"ContainerDied","Data":"eff7393185561c27413a75db83884015c05cba991a750ef914d6173d4cfdb168"} Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.222005 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f929-account-create-update-fx28c" event={"ID":"e1c75bcc-eb7e-442e-ae68-288c9c525e73","Type":"ContainerStarted","Data":"6730f10bd94d7161d8ce372dfcc7377ebcc4136083258cb118949d2ea12e364a"} Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.610599 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.664658 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.685023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7j54\" (UniqueName: \"kubernetes.io/projected/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-kube-api-access-p7j54\") pod \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.685190 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-operator-scripts\") pod \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\" (UID: \"57bc781e-12f1-4fd2-9bcb-fa3021a33e60\") " Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.686459 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57bc781e-12f1-4fd2-9bcb-fa3021a33e60" (UID: "57bc781e-12f1-4fd2-9bcb-fa3021a33e60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.697405 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-kube-api-access-p7j54" (OuterVolumeSpecName: "kube-api-access-p7j54") pod "57bc781e-12f1-4fd2-9bcb-fa3021a33e60" (UID: "57bc781e-12f1-4fd2-9bcb-fa3021a33e60"). InnerVolumeSpecName "kube-api-access-p7j54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.750469 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v7869" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.787691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnzc8\" (UniqueName: \"kubernetes.io/projected/4825dc1d-ef7f-4ab2-a873-1072afe8e515-kube-api-access-qnzc8\") pod \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.787787 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe8532ec-1e86-48f6-8446-3ff490756edd-operator-scripts\") pod \"fe8532ec-1e86-48f6-8446-3ff490756edd\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.787843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrmjc\" (UniqueName: \"kubernetes.io/projected/fe8532ec-1e86-48f6-8446-3ff490756edd-kube-api-access-hrmjc\") pod \"fe8532ec-1e86-48f6-8446-3ff490756edd\" (UID: \"fe8532ec-1e86-48f6-8446-3ff490756edd\") " Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.787905 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4825dc1d-ef7f-4ab2-a873-1072afe8e515-operator-scripts\") pod \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\" (UID: \"4825dc1d-ef7f-4ab2-a873-1072afe8e515\") " Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.788254 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.788271 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7j54\" (UniqueName: 
\"kubernetes.io/projected/57bc781e-12f1-4fd2-9bcb-fa3021a33e60-kube-api-access-p7j54\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.788607 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8532ec-1e86-48f6-8446-3ff490756edd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe8532ec-1e86-48f6-8446-3ff490756edd" (UID: "fe8532ec-1e86-48f6-8446-3ff490756edd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.788720 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4825dc1d-ef7f-4ab2-a873-1072afe8e515-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4825dc1d-ef7f-4ab2-a873-1072afe8e515" (UID: "4825dc1d-ef7f-4ab2-a873-1072afe8e515"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.792674 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4825dc1d-ef7f-4ab2-a873-1072afe8e515-kube-api-access-qnzc8" (OuterVolumeSpecName: "kube-api-access-qnzc8") pod "4825dc1d-ef7f-4ab2-a873-1072afe8e515" (UID: "4825dc1d-ef7f-4ab2-a873-1072afe8e515"). InnerVolumeSpecName "kube-api-access-qnzc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.792782 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8532ec-1e86-48f6-8446-3ff490756edd-kube-api-access-hrmjc" (OuterVolumeSpecName: "kube-api-access-hrmjc") pod "fe8532ec-1e86-48f6-8446-3ff490756edd" (UID: "fe8532ec-1e86-48f6-8446-3ff490756edd"). InnerVolumeSpecName "kube-api-access-hrmjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.890005 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnzc8\" (UniqueName: \"kubernetes.io/projected/4825dc1d-ef7f-4ab2-a873-1072afe8e515-kube-api-access-qnzc8\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.890050 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe8532ec-1e86-48f6-8446-3ff490756edd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.890060 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrmjc\" (UniqueName: \"kubernetes.io/projected/fe8532ec-1e86-48f6-8446-3ff490756edd-kube-api-access-hrmjc\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:47 crc kubenswrapper[4780]: I0219 08:39:47.890072 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4825dc1d-ef7f-4ab2-a873-1072afe8e515-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.252229 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9t6f2" Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.252559 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9t6f2" event={"ID":"fe8532ec-1e86-48f6-8446-3ff490756edd","Type":"ContainerDied","Data":"271f08ad26fe8ddd38f9d96ef2c120ff75a275b7feebae275ada6bfed5f822ac"} Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.252666 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271f08ad26fe8ddd38f9d96ef2c120ff75a275b7feebae275ada6bfed5f822ac" Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.256875 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-th9tx" event={"ID":"57bc781e-12f1-4fd2-9bcb-fa3021a33e60","Type":"ContainerDied","Data":"c3b4b1a49ae3b685701c6bc8934e01fb1517ba5ff9e670c1b1c51e07dc6387b2"} Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.256930 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b4b1a49ae3b685701c6bc8934e01fb1517ba5ff9e670c1b1c51e07dc6387b2" Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.256924 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-th9tx" Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.260734 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7869" event={"ID":"4825dc1d-ef7f-4ab2-a873-1072afe8e515","Type":"ContainerDied","Data":"c33975035650e037ad9915c49ee68df54b175bff2bc1e5cc840fb620ad8b9e15"} Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.260773 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33975035650e037ad9915c49ee68df54b175bff2bc1e5cc840fb620ad8b9e15" Feb 19 08:39:48 crc kubenswrapper[4780]: I0219 08:39:48.260912 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v7869" Feb 19 08:39:49 crc kubenswrapper[4780]: I0219 08:39:49.877361 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:39:49 crc kubenswrapper[4780]: I0219 08:39:49.974262 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-vdsk7"] Feb 19 08:39:49 crc kubenswrapper[4780]: I0219 08:39:49.974473 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerName="dnsmasq-dns" containerID="cri-o://08ba14a622f38f53deafda19fd51a8bc3be19364ea4ae11205cda0f38702178d" gracePeriod=10 Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.281471 4780 generic.go:334] "Generic (PLEG): container finished" podID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerID="08ba14a622f38f53deafda19fd51a8bc3be19364ea4ae11205cda0f38702178d" exitCode=0 Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.281517 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" event={"ID":"55561d8e-be0c-4936-895f-f31d1654cb8f","Type":"ContainerDied","Data":"08ba14a622f38f53deafda19fd51a8bc3be19364ea4ae11205cda0f38702178d"} Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.590422 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-fx28c" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.599167 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-058b-account-create-update-hd56g" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.625794 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-bvb8p" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.640934 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1c75bcc-eb7e-442e-ae68-288c9c525e73-operator-scripts\") pod \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.641095 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxsg4\" (UniqueName: \"kubernetes.io/projected/605584b0-916f-400c-a371-801f3eb3daa4-kube-api-access-hxsg4\") pod \"605584b0-916f-400c-a371-801f3eb3daa4\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.641141 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/605584b0-916f-400c-a371-801f3eb3daa4-operator-scripts\") pod \"605584b0-916f-400c-a371-801f3eb3daa4\" (UID: \"605584b0-916f-400c-a371-801f3eb3daa4\") " Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.641210 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57khn\" (UniqueName: \"kubernetes.io/projected/e1c75bcc-eb7e-442e-ae68-288c9c525e73-kube-api-access-57khn\") pod \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\" (UID: \"e1c75bcc-eb7e-442e-ae68-288c9c525e73\") " Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.641697 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1c75bcc-eb7e-442e-ae68-288c9c525e73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1c75bcc-eb7e-442e-ae68-288c9c525e73" (UID: "e1c75bcc-eb7e-442e-ae68-288c9c525e73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.645785 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.647895 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c75bcc-eb7e-442e-ae68-288c9c525e73-kube-api-access-57khn" (OuterVolumeSpecName: "kube-api-access-57khn") pod "e1c75bcc-eb7e-442e-ae68-288c9c525e73" (UID: "e1c75bcc-eb7e-442e-ae68-288c9c525e73"). InnerVolumeSpecName "kube-api-access-57khn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.649511 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605584b0-916f-400c-a371-801f3eb3daa4-kube-api-access-hxsg4" (OuterVolumeSpecName: "kube-api-access-hxsg4") pod "605584b0-916f-400c-a371-801f3eb3daa4" (UID: "605584b0-916f-400c-a371-801f3eb3daa4"). InnerVolumeSpecName "kube-api-access-hxsg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.649739 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605584b0-916f-400c-a371-801f3eb3daa4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "605584b0-916f-400c-a371-801f3eb3daa4" (UID: "605584b0-916f-400c-a371-801f3eb3daa4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.742724 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-nb\") pod \"55561d8e-be0c-4936-895f-f31d1654cb8f\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.742803 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtgg\" (UniqueName: \"kubernetes.io/projected/550d254b-03a9-46ed-bd17-84aa2b8a690f-kube-api-access-cqtgg\") pod \"550d254b-03a9-46ed-bd17-84aa2b8a690f\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.742890 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-sb\") pod \"55561d8e-be0c-4936-895f-f31d1654cb8f\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.742920 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-dns-svc\") pod \"55561d8e-be0c-4936-895f-f31d1654cb8f\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.742985 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-config\") pod \"55561d8e-be0c-4936-895f-f31d1654cb8f\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743042 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj28t\" (UniqueName: \"kubernetes.io/projected/55561d8e-be0c-4936-895f-f31d1654cb8f-kube-api-access-sj28t\") pod \"55561d8e-be0c-4936-895f-f31d1654cb8f\" (UID: \"55561d8e-be0c-4936-895f-f31d1654cb8f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743112 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d254b-03a9-46ed-bd17-84aa2b8a690f-operator-scripts\") pod \"550d254b-03a9-46ed-bd17-84aa2b8a690f\" (UID: \"550d254b-03a9-46ed-bd17-84aa2b8a690f\") "
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743674 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550d254b-03a9-46ed-bd17-84aa2b8a690f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "550d254b-03a9-46ed-bd17-84aa2b8a690f" (UID: "550d254b-03a9-46ed-bd17-84aa2b8a690f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743858 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxsg4\" (UniqueName: \"kubernetes.io/projected/605584b0-916f-400c-a371-801f3eb3daa4-kube-api-access-hxsg4\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743878 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/605584b0-916f-400c-a371-801f3eb3daa4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743909 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57khn\" (UniqueName: \"kubernetes.io/projected/e1c75bcc-eb7e-442e-ae68-288c9c525e73-kube-api-access-57khn\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743921 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d254b-03a9-46ed-bd17-84aa2b8a690f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.743931 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1c75bcc-eb7e-442e-ae68-288c9c525e73-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.746314 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550d254b-03a9-46ed-bd17-84aa2b8a690f-kube-api-access-cqtgg" (OuterVolumeSpecName: "kube-api-access-cqtgg") pod "550d254b-03a9-46ed-bd17-84aa2b8a690f" (UID: "550d254b-03a9-46ed-bd17-84aa2b8a690f"). InnerVolumeSpecName "kube-api-access-cqtgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.747733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55561d8e-be0c-4936-895f-f31d1654cb8f-kube-api-access-sj28t" (OuterVolumeSpecName: "kube-api-access-sj28t") pod "55561d8e-be0c-4936-895f-f31d1654cb8f" (UID: "55561d8e-be0c-4936-895f-f31d1654cb8f"). InnerVolumeSpecName "kube-api-access-sj28t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.779076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-config" (OuterVolumeSpecName: "config") pod "55561d8e-be0c-4936-895f-f31d1654cb8f" (UID: "55561d8e-be0c-4936-895f-f31d1654cb8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.782539 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55561d8e-be0c-4936-895f-f31d1654cb8f" (UID: "55561d8e-be0c-4936-895f-f31d1654cb8f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.786104 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55561d8e-be0c-4936-895f-f31d1654cb8f" (UID: "55561d8e-be0c-4936-895f-f31d1654cb8f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.792391 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55561d8e-be0c-4936-895f-f31d1654cb8f" (UID: "55561d8e-be0c-4936-895f-f31d1654cb8f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.844769 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.844803 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqtgg\" (UniqueName: \"kubernetes.io/projected/550d254b-03a9-46ed-bd17-84aa2b8a690f-kube-api-access-cqtgg\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.844818 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.844831 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.844843 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55561d8e-be0c-4936-895f-f31d1654cb8f-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:50 crc kubenswrapper[4780]: I0219 08:39:50.844854 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj28t\" (UniqueName: \"kubernetes.io/projected/55561d8e-be0c-4936-895f-f31d1654cb8f-kube-api-access-sj28t\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.293082 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.293411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-vdsk7" event={"ID":"55561d8e-be0c-4936-895f-f31d1654cb8f","Type":"ContainerDied","Data":"1ea8e289050f0fdc2b641d3a6f2f131d02d576d95a5f24f3ed71d456653de405"}
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.293469 4780 scope.go:117] "RemoveContainer" containerID="08ba14a622f38f53deafda19fd51a8bc3be19364ea4ae11205cda0f38702178d"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.295696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-058b-account-create-update-hd56g" event={"ID":"605584b0-916f-400c-a371-801f3eb3daa4","Type":"ContainerDied","Data":"0aa43c9226f1c70f4e80c62a79aa46bca4fb438fa62df620a29918728ca79db2"}
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.295722 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa43c9226f1c70f4e80c62a79aa46bca4fb438fa62df620a29918728ca79db2"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.295726 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-058b-account-create-update-hd56g"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.298031 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-bvb8p"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.298138 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eb1b-account-create-update-bvb8p" event={"ID":"550d254b-03a9-46ed-bd17-84aa2b8a690f","Type":"ContainerDied","Data":"5b0b1657b8b2caf3cbd4a6e3349edb015edeed3a752029075ef98ee60d2f1686"}
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.298227 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b0b1657b8b2caf3cbd4a6e3349edb015edeed3a752029075ef98ee60d2f1686"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.300841 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f929-account-create-update-fx28c" event={"ID":"e1c75bcc-eb7e-442e-ae68-288c9c525e73","Type":"ContainerDied","Data":"6730f10bd94d7161d8ce372dfcc7377ebcc4136083258cb118949d2ea12e364a"}
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.300882 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6730f10bd94d7161d8ce372dfcc7377ebcc4136083258cb118949d2ea12e364a"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.300861 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-fx28c"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.302729 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sfs2h" event={"ID":"b4821312-0274-4930-bd0a-d6438b1e3e56","Type":"ContainerStarted","Data":"ce68a2104a99b076a82e6dfc356074c88a457b853c24a0d53b84bead36b5e461"}
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.325679 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sfs2h" podStartSLOduration=2.525408888 podStartE2EDuration="7.325655149s" podCreationTimestamp="2026-02-19 08:39:44 +0000 UTC" firstStartedPulling="2026-02-19 08:39:45.635228591 +0000 UTC m=+1128.378886040" lastFinishedPulling="2026-02-19 08:39:50.435474852 +0000 UTC m=+1133.179132301" observedRunningTime="2026-02-19 08:39:51.324229034 +0000 UTC m=+1134.067886483" watchObservedRunningTime="2026-02-19 08:39:51.325655149 +0000 UTC m=+1134.069312608"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.341914 4780 scope.go:117] "RemoveContainer" containerID="1fa75cebf640685724bc728308305bd8bdc9c2b2c79d78320bb500c96f439103"
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.374250 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-vdsk7"]
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.388556 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-vdsk7"]
Feb 19 08:39:51 crc kubenswrapper[4780]: I0219 08:39:51.948936 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" path="/var/lib/kubelet/pods/55561d8e-be0c-4936-895f-f31d1654cb8f/volumes"
Feb 19 08:39:56 crc kubenswrapper[4780]: I0219 08:39:56.354495 4780 generic.go:334] "Generic (PLEG): container finished" podID="b4821312-0274-4930-bd0a-d6438b1e3e56" containerID="ce68a2104a99b076a82e6dfc356074c88a457b853c24a0d53b84bead36b5e461" exitCode=0
Feb 19 08:39:56 crc kubenswrapper[4780]: I0219 08:39:56.354603 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sfs2h" event={"ID":"b4821312-0274-4930-bd0a-d6438b1e3e56","Type":"ContainerDied","Data":"ce68a2104a99b076a82e6dfc356074c88a457b853c24a0d53b84bead36b5e461"}
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.779523 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sfs2h"
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.856002 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx9k8\" (UniqueName: \"kubernetes.io/projected/b4821312-0274-4930-bd0a-d6438b1e3e56-kube-api-access-zx9k8\") pod \"b4821312-0274-4930-bd0a-d6438b1e3e56\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") "
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.856068 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-config-data\") pod \"b4821312-0274-4930-bd0a-d6438b1e3e56\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") "
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.856155 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-combined-ca-bundle\") pod \"b4821312-0274-4930-bd0a-d6438b1e3e56\" (UID: \"b4821312-0274-4930-bd0a-d6438b1e3e56\") "
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.861247 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4821312-0274-4930-bd0a-d6438b1e3e56-kube-api-access-zx9k8" (OuterVolumeSpecName: "kube-api-access-zx9k8") pod "b4821312-0274-4930-bd0a-d6438b1e3e56" (UID: "b4821312-0274-4930-bd0a-d6438b1e3e56"). InnerVolumeSpecName "kube-api-access-zx9k8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.882301 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4821312-0274-4930-bd0a-d6438b1e3e56" (UID: "b4821312-0274-4930-bd0a-d6438b1e3e56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.900622 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-config-data" (OuterVolumeSpecName: "config-data") pod "b4821312-0274-4930-bd0a-d6438b1e3e56" (UID: "b4821312-0274-4930-bd0a-d6438b1e3e56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.959020 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx9k8\" (UniqueName: \"kubernetes.io/projected/b4821312-0274-4930-bd0a-d6438b1e3e56-kube-api-access-zx9k8\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.959060 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:57 crc kubenswrapper[4780]: I0219 08:39:57.959080 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4821312-0274-4930-bd0a-d6438b1e3e56-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.380344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sfs2h" event={"ID":"b4821312-0274-4930-bd0a-d6438b1e3e56","Type":"ContainerDied","Data":"0a572448ff3d85aa1c7c0f402790d5a320514f3027465c8be19d1a3bd38cc969"}
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.380405 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a572448ff3d85aa1c7c0f402790d5a320514f3027465c8be19d1a3bd38cc969"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.380484 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sfs2h"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.710558 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l7zgw"]
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711162 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8532ec-1e86-48f6-8446-3ff490756edd" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711178 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8532ec-1e86-48f6-8446-3ff490756edd" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711196 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d254b-03a9-46ed-bd17-84aa2b8a690f" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711201 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d254b-03a9-46ed-bd17-84aa2b8a690f" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711209 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bc781e-12f1-4fd2-9bcb-fa3021a33e60" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711215 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bc781e-12f1-4fd2-9bcb-fa3021a33e60" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711225 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605584b0-916f-400c-a371-801f3eb3daa4" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711231 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="605584b0-916f-400c-a371-801f3eb3daa4" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711245 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerName="init"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711250 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerName="init"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711267 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c75bcc-eb7e-442e-ae68-288c9c525e73" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711273 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c75bcc-eb7e-442e-ae68-288c9c525e73" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711281 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4821312-0274-4930-bd0a-d6438b1e3e56" containerName="keystone-db-sync"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711286 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4821312-0274-4930-bd0a-d6438b1e3e56" containerName="keystone-db-sync"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711296 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4825dc1d-ef7f-4ab2-a873-1072afe8e515" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711302 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4825dc1d-ef7f-4ab2-a873-1072afe8e515" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: E0219 08:39:58.711310 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerName="dnsmasq-dns"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711315 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerName="dnsmasq-dns"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711458 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="605584b0-916f-400c-a371-801f3eb3daa4" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711471 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4825dc1d-ef7f-4ab2-a873-1072afe8e515" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711480 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c75bcc-eb7e-442e-ae68-288c9c525e73" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711491 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="55561d8e-be0c-4936-895f-f31d1654cb8f" containerName="dnsmasq-dns"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711499 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="550d254b-03a9-46ed-bd17-84aa2b8a690f" containerName="mariadb-account-create-update"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711508 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8532ec-1e86-48f6-8446-3ff490756edd" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711516 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4821312-0274-4930-bd0a-d6438b1e3e56" containerName="keystone-db-sync"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.711524 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bc781e-12f1-4fd2-9bcb-fa3021a33e60" containerName="mariadb-database-create"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.712029 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.716973 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.717050 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.716975 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gt54x"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.717204 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.717265 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.741488 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l7zgw"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.769163 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-f67zh"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.771980 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.780092 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-config-data\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.780226 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-fernet-keys\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.780261 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-scripts\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.780293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-combined-ca-bundle\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.780316 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66lj\" (UniqueName: \"kubernetes.io/projected/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-kube-api-access-l66lj\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.780390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-credential-keys\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.826201 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-f67zh"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.885969 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-nb\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886024 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-sb\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886048 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-credential-keys\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886072 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-svc\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-swift-storage-0\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886146 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-config\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886167 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-config-data\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886187 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhqfl\" (UniqueName: \"kubernetes.io/projected/ac390d1a-0d42-4297-9b1d-c415c14d2858-kube-api-access-qhqfl\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886220 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-fernet-keys\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886239 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-scripts\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886261 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66lj\" (UniqueName: \"kubernetes.io/projected/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-kube-api-access-l66lj\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.886278 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-combined-ca-bundle\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.897772 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-scripts\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.898175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-fernet-keys\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.902760 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-combined-ca-bundle\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.912655 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-credential-keys\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.922763 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66lj\" (UniqueName: \"kubernetes.io/projected/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-kube-api-access-l66lj\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.922833 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vf6dk"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.923189 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-config-data\") pod \"keystone-bootstrap-l7zgw\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " pod="openstack/keystone-bootstrap-l7zgw"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.923949 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vf6dk"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.929657 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.929973 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ldrkc"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.930086 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.944495 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vf6dk"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.961148 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.963043 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.973247 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.973517 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.977229 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988266 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-config\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988329 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjzp\" (UniqueName: \"kubernetes.io/projected/0b200855-daef-430e-8967-62c2e51acc86-kube-api-access-ngjzp\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988363 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-combined-ca-bundle\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988413 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988448 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsnn\" (UniqueName: \"kubernetes.io/projected/358e3824-b47f-4b03-939b-de8e27734e40-kube-api-access-pnsnn\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988484 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-nb\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988510 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-run-httpd\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-sb\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-config-data\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988608 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988629 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-scripts\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0"
Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-svc\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-swift-storage-0\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988813 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-config\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988849 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhqfl\" (UniqueName: \"kubernetes.io/projected/ac390d1a-0d42-4297-9b1d-c415c14d2858-kube-api-access-qhqfl\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.988899 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-log-httpd\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.994063 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.995008 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-svc\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.995156 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-config\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.995696 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-swift-storage-0\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:58 crc kubenswrapper[4780]: I0219 08:39:58.998093 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-nb\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.022958 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xdwdm"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.044811 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xdwdm"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 
08:39:59.044902 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.051237 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.051436 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.051541 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fl67x" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.054561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhqfl\" (UniqueName: \"kubernetes.io/projected/ac390d1a-0d42-4297-9b1d-c415c14d2858-kube-api-access-qhqfl\") pod \"dnsmasq-dns-7d67cdfc8f-f67zh\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.077568 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l7zgw" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090072 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-config-data\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090132 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-scripts\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090182 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937a02e9-aead-48c0-9c00-28a327719c18-etc-machine-id\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-db-sync-config-data\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090246 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-config-data\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090268 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-log-httpd\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090298 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvj5h\" (UniqueName: \"kubernetes.io/projected/937a02e9-aead-48c0-9c00-28a327719c18-kube-api-access-jvj5h\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090327 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-config\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjzp\" (UniqueName: \"kubernetes.io/projected/0b200855-daef-430e-8967-62c2e51acc86-kube-api-access-ngjzp\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090358 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-scripts\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090375 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-combined-ca-bundle\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090408 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090429 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsnn\" (UniqueName: \"kubernetes.io/projected/358e3824-b47f-4b03-939b-de8e27734e40-kube-api-access-pnsnn\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-combined-ca-bundle\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.090471 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-run-httpd\") pod \"ceilometer-0\" (UID: 
\"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.091191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-run-httpd\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.091630 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-log-httpd\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.094448 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.103758 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4s9pd"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.107906 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.108358 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-config-data\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.112230 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.113166 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-combined-ca-bundle\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.113292 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6nkds" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.113523 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.113749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-config\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.123912 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-scripts\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.131041 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.141796 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsnn\" (UniqueName: \"kubernetes.io/projected/358e3824-b47f-4b03-939b-de8e27734e40-kube-api-access-pnsnn\") pod \"ceilometer-0\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.147573 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4s9pd"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.148837 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjzp\" (UniqueName: \"kubernetes.io/projected/0b200855-daef-430e-8967-62c2e51acc86-kube-api-access-ngjzp\") pod \"neutron-db-sync-vf6dk\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " 
pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.189193 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-f67zh"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-combined-ca-bundle\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191672 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-combined-ca-bundle\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-db-sync-config-data\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191727 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937a02e9-aead-48c0-9c00-28a327719c18-etc-machine-id\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191762 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29p46\" (UniqueName: 
\"kubernetes.io/projected/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-kube-api-access-29p46\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191781 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-db-sync-config-data\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191796 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-config-data\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191825 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvj5h\" (UniqueName: \"kubernetes.io/projected/937a02e9-aead-48c0-9c00-28a327719c18-kube-api-access-jvj5h\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.191846 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-scripts\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.192797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937a02e9-aead-48c0-9c00-28a327719c18-etc-machine-id\") pod \"cinder-db-sync-xdwdm\" (UID: 
\"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.201233 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-scripts\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.205089 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-db-sync-config-data\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.206089 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-combined-ca-bundle\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.214306 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-config-data\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.219366 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-zjnch"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.220709 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.222904 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.223213 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.223383 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7h4vj" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.231820 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zjnch"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.241084 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvj5h\" (UniqueName: \"kubernetes.io/projected/937a02e9-aead-48c0-9c00-28a327719c18-kube-api-access-jvj5h\") pod \"cinder-db-sync-xdwdm\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.258003 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67dccc895-k8fms"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.260274 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.275804 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-k8fms"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.300433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc03068-08f3-4ded-9521-145d162f2053-logs\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.300621 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-combined-ca-bundle\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.300874 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-nb\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.301007 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-db-sync-config-data\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.301052 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-scripts\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.301156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbp2f\" (UniqueName: \"kubernetes.io/projected/cbc03068-08f3-4ded-9521-145d162f2053-kube-api-access-lbp2f\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.301203 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-config-data\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.301254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29p46\" (UniqueName: \"kubernetes.io/projected/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-kube-api-access-29p46\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.302778 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.303001 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-svc\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.303056 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-config\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.303094 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-swift-storage-0\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.303354 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-combined-ca-bundle\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.303412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87znl\" (UniqueName: \"kubernetes.io/projected/c93d79e3-0727-4aef-b65d-98d315a9957b-kube-api-access-87znl\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: 
\"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.303578 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-sb\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.316362 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-combined-ca-bundle\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.319157 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29p46\" (UniqueName: \"kubernetes.io/projected/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-kube-api-access-29p46\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.323061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-db-sync-config-data\") pod \"barbican-db-sync-4s9pd\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.336313 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-sb\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408512 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc03068-08f3-4ded-9521-145d162f2053-logs\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408582 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-nb\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-scripts\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408668 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbp2f\" (UniqueName: \"kubernetes.io/projected/cbc03068-08f3-4ded-9521-145d162f2053-kube-api-access-lbp2f\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 
08:39:59.408701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-config-data\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408748 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-svc\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408775 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-config\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408796 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-swift-storage-0\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-combined-ca-bundle\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.408869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-87znl\" (UniqueName: \"kubernetes.io/projected/c93d79e3-0727-4aef-b65d-98d315a9957b-kube-api-access-87znl\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.409671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc03068-08f3-4ded-9521-145d162f2053-logs\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.410592 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-nb\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.411295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-sb\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.411776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-svc\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.412296 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-swift-storage-0\") pod 
\"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.412780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-config\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.415878 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-config-data\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.418187 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-scripts\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.424232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-combined-ca-bundle\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.426195 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87znl\" (UniqueName: \"kubernetes.io/projected/c93d79e3-0727-4aef-b65d-98d315a9957b-kube-api-access-87znl\") pod \"dnsmasq-dns-67dccc895-k8fms\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc 
kubenswrapper[4780]: I0219 08:39:59.428947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbp2f\" (UniqueName: \"kubernetes.io/projected/cbc03068-08f3-4ded-9521-145d162f2053-kube-api-access-lbp2f\") pod \"placement-db-sync-zjnch\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.493994 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.514721 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.568445 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zjnch" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.599751 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.741646 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l7zgw"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.857952 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.859768 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.862893 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.868407 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xmvzm" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.868648 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.868802 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.895486 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.924747 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.924785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-logs\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.924821 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4kh\" (UniqueName: \"kubernetes.io/projected/c75c6859-9384-4c7b-8ef0-22f987fa9467-kube-api-access-5n4kh\") pod 
\"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.924869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.925088 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.925165 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-scripts\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.925199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.925215 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-config-data\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.972241 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.973798 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.973898 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.980489 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.980533 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.982965 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-f67zh"] Feb 19 08:39:59 crc kubenswrapper[4780]: I0219 08:39:59.992671 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vf6dk"] Feb 19 08:40:00 crc kubenswrapper[4780]: W0219 08:40:00.023116 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b200855_daef_430e_8967_62c2e51acc86.slice/crio-783a73d0a5bfa930b216d2a20b0cab431168d7eaaaf390cc2013df9804ec17d3 WatchSource:0}: Error finding container 783a73d0a5bfa930b216d2a20b0cab431168d7eaaaf390cc2013df9804ec17d3: Status 404 returned error can't find the container with id 783a73d0a5bfa930b216d2a20b0cab431168d7eaaaf390cc2013df9804ec17d3 Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 
08:40:00.026983 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-scripts\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027089 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-config-data\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027193 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-logs\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027259 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4kh\" 
(UniqueName: \"kubernetes.io/projected/c75c6859-9384-4c7b-8ef0-22f987fa9467-kube-api-access-5n4kh\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027359 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.027445 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.028005 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.028100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.028685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-logs\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.041096 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-scripts\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.042379 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.047394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-config-data\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.050004 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4kh\" (UniqueName: \"kubernetes.io/projected/c75c6859-9384-4c7b-8ef0-22f987fa9467-kube-api-access-5n4kh\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.055117 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.066574 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.129551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-logs\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.129628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.129717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbq5\" (UniqueName: \"kubernetes.io/projected/465a8543-c177-44ed-a7ec-6a5d108d9537-kube-api-access-dhbq5\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.129755 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.129800 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.129852 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.130066 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.130114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.193842 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.204610 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.214972 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xdwdm"] Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231311 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231351 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231398 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-logs\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231501 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dhbq5\" (UniqueName: \"kubernetes.io/projected/465a8543-c177-44ed-a7ec-6a5d108d9537-kube-api-access-dhbq5\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231573 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.231600 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.232173 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.237498 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.237710 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-logs\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.243341 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.243612 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.244995 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.250036 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.266539 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhbq5\" (UniqueName: \"kubernetes.io/projected/465a8543-c177-44ed-a7ec-6a5d108d9537-kube-api-access-dhbq5\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.316076 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.345339 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4s9pd"] Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.390228 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-k8fms"] Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.397811 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zjnch"] Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.415576 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerStarted","Data":"eff2072c157ef0a40149bcadd00f6bbfafc45f8606b1433452515e561d5c927d"} Feb 19 08:40:00 crc kubenswrapper[4780]: W0219 08:40:00.420682 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0f0d73a_fdfa_471b_92d4_4433cff2bda8.slice/crio-00296a180d89963f16236610cbb233f2b5257b3f5d4c6ff50f8a28c521d156bc WatchSource:0}: Error finding container 00296a180d89963f16236610cbb233f2b5257b3f5d4c6ff50f8a28c521d156bc: Status 404 returned error can't find the container with id 00296a180d89963f16236610cbb233f2b5257b3f5d4c6ff50f8a28c521d156bc Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.420974 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf6dk" event={"ID":"0b200855-daef-430e-8967-62c2e51acc86","Type":"ContainerStarted","Data":"783a73d0a5bfa930b216d2a20b0cab431168d7eaaaf390cc2013df9804ec17d3"} Feb 19 08:40:00 crc kubenswrapper[4780]: W0219 08:40:00.426652 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc93d79e3_0727_4aef_b65d_98d315a9957b.slice/crio-a4b9410d9beaf4bbb7ecea5bca5cd07bc67fa1b51615681f16c3a70dac27f5f7 WatchSource:0}: Error finding container a4b9410d9beaf4bbb7ecea5bca5cd07bc67fa1b51615681f16c3a70dac27f5f7: Status 404 returned error can't find the container with id a4b9410d9beaf4bbb7ecea5bca5cd07bc67fa1b51615681f16c3a70dac27f5f7 Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.427913 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" event={"ID":"ac390d1a-0d42-4297-9b1d-c415c14d2858","Type":"ContainerStarted","Data":"f59b15aa7478fe1c2e106f685b73930fd6f5e0c3bb7270a3bc546d81bee0a717"} Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.428063 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" podUID="ac390d1a-0d42-4297-9b1d-c415c14d2858" containerName="init" containerID="cri-o://c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f" gracePeriod=10 Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.433425 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xdwdm" event={"ID":"937a02e9-aead-48c0-9c00-28a327719c18","Type":"ContainerStarted","Data":"6b194eb5875ac17b8746fa575f9d930d3fea500ae118f8a9f778baef8e15f7ae"} Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.438193 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7zgw" event={"ID":"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d","Type":"ContainerStarted","Data":"463f7a315372d742540d3e23bc19cec621947de1c1a74823c6bc0ba15fab441d"} Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.477511 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vf6dk" podStartSLOduration=2.477496715 podStartE2EDuration="2.477496715s" podCreationTimestamp="2026-02-19 08:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:00.47730941 +0000 UTC m=+1143.220966859" watchObservedRunningTime="2026-02-19 08:40:00.477496715 +0000 UTC m=+1143.221154164" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.521746 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.856635 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.886815 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.950555 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-nb\") pod \"ac390d1a-0d42-4297-9b1d-c415c14d2858\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.950760 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-config\") pod \"ac390d1a-0d42-4297-9b1d-c415c14d2858\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.950843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-sb\") pod \"ac390d1a-0d42-4297-9b1d-c415c14d2858\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.950876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhqfl\" (UniqueName: \"kubernetes.io/projected/ac390d1a-0d42-4297-9b1d-c415c14d2858-kube-api-access-qhqfl\") pod \"ac390d1a-0d42-4297-9b1d-c415c14d2858\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.950920 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-svc\") pod \"ac390d1a-0d42-4297-9b1d-c415c14d2858\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.950967 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-swift-storage-0\") pod \"ac390d1a-0d42-4297-9b1d-c415c14d2858\" (UID: \"ac390d1a-0d42-4297-9b1d-c415c14d2858\") " Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.973341 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac390d1a-0d42-4297-9b1d-c415c14d2858-kube-api-access-qhqfl" (OuterVolumeSpecName: "kube-api-access-qhqfl") pod "ac390d1a-0d42-4297-9b1d-c415c14d2858" (UID: "ac390d1a-0d42-4297-9b1d-c415c14d2858"). InnerVolumeSpecName "kube-api-access-qhqfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.992089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac390d1a-0d42-4297-9b1d-c415c14d2858" (UID: "ac390d1a-0d42-4297-9b1d-c415c14d2858"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.993657 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-config" (OuterVolumeSpecName: "config") pod "ac390d1a-0d42-4297-9b1d-c415c14d2858" (UID: "ac390d1a-0d42-4297-9b1d-c415c14d2858"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:00 crc kubenswrapper[4780]: I0219 08:40:00.997873 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac390d1a-0d42-4297-9b1d-c415c14d2858" (UID: "ac390d1a-0d42-4297-9b1d-c415c14d2858"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.001287 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac390d1a-0d42-4297-9b1d-c415c14d2858" (UID: "ac390d1a-0d42-4297-9b1d-c415c14d2858"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.005416 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac390d1a-0d42-4297-9b1d-c415c14d2858" (UID: "ac390d1a-0d42-4297-9b1d-c415c14d2858"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.053248 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.053280 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.053292 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.053300 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.053309 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhqfl\" (UniqueName: \"kubernetes.io/projected/ac390d1a-0d42-4297-9b1d-c415c14d2858-kube-api-access-qhqfl\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.053319 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac390d1a-0d42-4297-9b1d-c415c14d2858-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.193060 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.258562 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.324018 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.361356 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.472538 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac390d1a-0d42-4297-9b1d-c415c14d2858" containerID="c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f" exitCode=0 Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.472608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" event={"ID":"ac390d1a-0d42-4297-9b1d-c415c14d2858","Type":"ContainerDied","Data":"c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.472613 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.472635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d67cdfc8f-f67zh" event={"ID":"ac390d1a-0d42-4297-9b1d-c415c14d2858","Type":"ContainerDied","Data":"f59b15aa7478fe1c2e106f685b73930fd6f5e0c3bb7270a3bc546d81bee0a717"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.472651 4780 scope.go:117] "RemoveContainer" containerID="c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.476021 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465a8543-c177-44ed-a7ec-6a5d108d9537","Type":"ContainerStarted","Data":"6c6c1aad1ca68813e7f1449f988a927a5f306e3156dd8211479df2e256d6cb99"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.493806 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7zgw" event={"ID":"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d","Type":"ContainerStarted","Data":"915f81b7075274d01be1a9a53b49c578e0ee7d68fa4534042cdbf87c6173e436"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.505109 4780 generic.go:334] "Generic (PLEG): container finished" podID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerID="89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100" exitCode=0 Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.505175 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-k8fms" event={"ID":"c93d79e3-0727-4aef-b65d-98d315a9957b","Type":"ContainerDied","Data":"89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.505207 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-k8fms" 
event={"ID":"c93d79e3-0727-4aef-b65d-98d315a9957b","Type":"ContainerStarted","Data":"a4b9410d9beaf4bbb7ecea5bca5cd07bc67fa1b51615681f16c3a70dac27f5f7"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.516401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c75c6859-9384-4c7b-8ef0-22f987fa9467","Type":"ContainerStarted","Data":"35b6794e78de0a87bae50d63a247abe07d4d539322d0fa6f794aee95d5715ca2"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.525412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zjnch" event={"ID":"cbc03068-08f3-4ded-9521-145d162f2053","Type":"ContainerStarted","Data":"427853e2d4b624fa2154afd47859b01e1722a416fd0fd84bb3611fe678f3b907"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.526283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4s9pd" event={"ID":"b0f0d73a-fdfa-471b-92d4-4433cff2bda8","Type":"ContainerStarted","Data":"00296a180d89963f16236610cbb233f2b5257b3f5d4c6ff50f8a28c521d156bc"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.530939 4780 scope.go:117] "RemoveContainer" containerID="c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f" Feb 19 08:40:01 crc kubenswrapper[4780]: E0219 08:40:01.531278 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f\": container with ID starting with c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f not found: ID does not exist" containerID="c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.531303 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f"} err="failed to get container status 
\"c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f\": rpc error: code = NotFound desc = could not find container \"c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f\": container with ID starting with c3723dc1c968af7f2f272c7a236e4531bf8b65795e7d34e1588dcd38a20b8f8f not found: ID does not exist" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.531449 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf6dk" event={"ID":"0b200855-daef-430e-8967-62c2e51acc86","Type":"ContainerStarted","Data":"b507eaf6aa48bfdaa9db8ae94ab3426bff9bdb3667fb4f653a3340a6b2340be5"} Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.552312 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l7zgw" podStartSLOduration=3.552296075 podStartE2EDuration="3.552296075s" podCreationTimestamp="2026-02-19 08:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:01.519392222 +0000 UTC m=+1144.263049671" watchObservedRunningTime="2026-02-19 08:40:01.552296075 +0000 UTC m=+1144.295953524" Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.672946 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-f67zh"] Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.691279 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-f67zh"] Feb 19 08:40:01 crc kubenswrapper[4780]: I0219 08:40:01.946722 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac390d1a-0d42-4297-9b1d-c415c14d2858" path="/var/lib/kubelet/pods/ac390d1a-0d42-4297-9b1d-c415c14d2858/volumes" Feb 19 08:40:02 crc kubenswrapper[4780]: I0219 08:40:02.543923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c75c6859-9384-4c7b-8ef0-22f987fa9467","Type":"ContainerStarted","Data":"5fc888778ff62785a9d5ac72216467e3916ae67b86c8db0d1ff60070f995ed8a"} Feb 19 08:40:03 crc kubenswrapper[4780]: I0219 08:40:03.556557 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465a8543-c177-44ed-a7ec-6a5d108d9537","Type":"ContainerStarted","Data":"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8"} Feb 19 08:40:03 crc kubenswrapper[4780]: I0219 08:40:03.565535 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-k8fms" event={"ID":"c93d79e3-0727-4aef-b65d-98d315a9957b","Type":"ContainerStarted","Data":"c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0"} Feb 19 08:40:03 crc kubenswrapper[4780]: I0219 08:40:03.566893 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:40:03 crc kubenswrapper[4780]: I0219 08:40:03.589496 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67dccc895-k8fms" podStartSLOduration=4.589467906 podStartE2EDuration="4.589467906s" podCreationTimestamp="2026-02-19 08:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:03.581311894 +0000 UTC m=+1146.324969343" watchObservedRunningTime="2026-02-19 08:40:03.589467906 +0000 UTC m=+1146.333125355" Feb 19 08:40:04 crc kubenswrapper[4780]: I0219 08:40:04.580795 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c75c6859-9384-4c7b-8ef0-22f987fa9467","Type":"ContainerStarted","Data":"bb3a1a921bf7425424e3f0443cf9d10a9e6e1f5c4cbbe0e650c7ed2c0627fb38"} Feb 19 08:40:04 crc kubenswrapper[4780]: I0219 08:40:04.581054 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-log" containerID="cri-o://5fc888778ff62785a9d5ac72216467e3916ae67b86c8db0d1ff60070f995ed8a" gracePeriod=30 Feb 19 08:40:04 crc kubenswrapper[4780]: I0219 08:40:04.581136 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-httpd" containerID="cri-o://bb3a1a921bf7425424e3f0443cf9d10a9e6e1f5c4cbbe0e650c7ed2c0627fb38" gracePeriod=30 Feb 19 08:40:04 crc kubenswrapper[4780]: I0219 08:40:04.612519 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.612500367 podStartE2EDuration="6.612500367s" podCreationTimestamp="2026-02-19 08:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:04.601226248 +0000 UTC m=+1147.344883697" watchObservedRunningTime="2026-02-19 08:40:04.612500367 +0000 UTC m=+1147.356157806" Feb 19 08:40:05 crc kubenswrapper[4780]: I0219 08:40:05.595755 4780 generic.go:334] "Generic (PLEG): container finished" podID="27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" containerID="915f81b7075274d01be1a9a53b49c578e0ee7d68fa4534042cdbf87c6173e436" exitCode=0 Feb 19 08:40:05 crc kubenswrapper[4780]: I0219 08:40:05.596074 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7zgw" event={"ID":"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d","Type":"ContainerDied","Data":"915f81b7075274d01be1a9a53b49c578e0ee7d68fa4534042cdbf87c6173e436"} Feb 19 08:40:05 crc kubenswrapper[4780]: I0219 08:40:05.600701 4780 generic.go:334] "Generic (PLEG): container finished" podID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerID="bb3a1a921bf7425424e3f0443cf9d10a9e6e1f5c4cbbe0e650c7ed2c0627fb38" exitCode=0 Feb 19 08:40:05 crc 
kubenswrapper[4780]: I0219 08:40:05.600738 4780 generic.go:334] "Generic (PLEG): container finished" podID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerID="5fc888778ff62785a9d5ac72216467e3916ae67b86c8db0d1ff60070f995ed8a" exitCode=143 Feb 19 08:40:05 crc kubenswrapper[4780]: I0219 08:40:05.600781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c75c6859-9384-4c7b-8ef0-22f987fa9467","Type":"ContainerDied","Data":"bb3a1a921bf7425424e3f0443cf9d10a9e6e1f5c4cbbe0e650c7ed2c0627fb38"} Feb 19 08:40:05 crc kubenswrapper[4780]: I0219 08:40:05.600845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c75c6859-9384-4c7b-8ef0-22f987fa9467","Type":"ContainerDied","Data":"5fc888778ff62785a9d5ac72216467e3916ae67b86c8db0d1ff60070f995ed8a"} Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.336033 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.336420 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.336477 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.337390 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7e9c7d776ebb9cfa84ab86f90613a2614d1dd3b9e07622f097d10b3d1fb58504"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.337466 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://7e9c7d776ebb9cfa84ab86f90613a2614d1dd3b9e07622f097d10b3d1fb58504" gracePeriod=600 Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.612790 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="7e9c7d776ebb9cfa84ab86f90613a2614d1dd3b9e07622f097d10b3d1fb58504" exitCode=0 Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.612992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"7e9c7d776ebb9cfa84ab86f90613a2614d1dd3b9e07622f097d10b3d1fb58504"} Feb 19 08:40:06 crc kubenswrapper[4780]: I0219 08:40:06.613035 4780 scope.go:117] "RemoveContainer" containerID="d86287631278548b0c80b1e05e352ed6295219281318df1c4880abec1eb44525" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.881169 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.911877 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n4kh\" (UniqueName: \"kubernetes.io/projected/c75c6859-9384-4c7b-8ef0-22f987fa9467-kube-api-access-5n4kh\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.911940 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-config-data\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.912017 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-logs\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.912046 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-combined-ca-bundle\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.912116 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.912150 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-scripts\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.912209 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-httpd-run\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.912254 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-public-tls-certs\") pod \"c75c6859-9384-4c7b-8ef0-22f987fa9467\" (UID: \"c75c6859-9384-4c7b-8ef0-22f987fa9467\") " Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.917383 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-logs" (OuterVolumeSpecName: "logs") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.918783 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.921709 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-scripts" (OuterVolumeSpecName: "scripts") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.924800 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75c6859-9384-4c7b-8ef0-22f987fa9467-kube-api-access-5n4kh" (OuterVolumeSpecName: "kube-api-access-5n4kh") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "kube-api-access-5n4kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.926166 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.950566 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.986609 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-config-data" (OuterVolumeSpecName: "config-data") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:08 crc kubenswrapper[4780]: I0219 08:40:08.995551 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c75c6859-9384-4c7b-8ef0-22f987fa9467" (UID: "c75c6859-9384-4c7b-8ef0-22f987fa9467"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013414 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013460 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n4kh\" (UniqueName: \"kubernetes.io/projected/c75c6859-9384-4c7b-8ef0-22f987fa9467-kube-api-access-5n4kh\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013477 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013490 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc 
kubenswrapper[4780]: I0219 08:40:09.013504 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013547 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013564 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c6859-9384-4c7b-8ef0-22f987fa9467-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.013580 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75c6859-9384-4c7b-8ef0-22f987fa9467-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.042821 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.115617 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.601343 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.668253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c75c6859-9384-4c7b-8ef0-22f987fa9467","Type":"ContainerDied","Data":"35b6794e78de0a87bae50d63a247abe07d4d539322d0fa6f794aee95d5715ca2"} Feb 19 08:40:09 
crc kubenswrapper[4780]: I0219 08:40:09.668434 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.670780 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-cmlpn"] Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.671048 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" containerID="cri-o://0626ef430b1b230e5dc5f68b1d95f90d7488328b8997e6848ba98c563993a58e" gracePeriod=10 Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.838531 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.863231 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.897657 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.937881 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:09 crc kubenswrapper[4780]: E0219 08:40:09.938291 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-log" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.938306 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-log" Feb 19 08:40:09 crc kubenswrapper[4780]: E0219 08:40:09.938319 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ac390d1a-0d42-4297-9b1d-c415c14d2858" containerName="init" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.938325 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac390d1a-0d42-4297-9b1d-c415c14d2858" containerName="init" Feb 19 08:40:09 crc kubenswrapper[4780]: E0219 08:40:09.938336 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-httpd" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.938341 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-httpd" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.938491 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac390d1a-0d42-4297-9b1d-c415c14d2858" containerName="init" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.938508 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-log" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.938523 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" containerName="glance-httpd" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.939471 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.943571 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.943804 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.983892 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75c6859-9384-4c7b-8ef0-22f987fa9467" path="/var/lib/kubelet/pods/c75c6859-9384-4c7b-8ef0-22f987fa9467/volumes" Feb 19 08:40:09 crc kubenswrapper[4780]: I0219 08:40:09.984744 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.142758 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.142823 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4glg\" (UniqueName: \"kubernetes.io/projected/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-kube-api-access-n4glg\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.142846 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.142864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-logs\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.142938 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.143184 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.143226 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.143306 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" 
(UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245061 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245136 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4glg\" (UniqueName: \"kubernetes.io/projected/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-kube-api-access-n4glg\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245165 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245195 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-logs\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " 
pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245266 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245289 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245333 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245567 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.245897 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc 
kubenswrapper[4780]: I0219 08:40:10.246004 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-logs\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.252101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.252677 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.253347 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.254720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.264898 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n4glg\" (UniqueName: \"kubernetes.io/projected/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-kube-api-access-n4glg\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.288941 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") " pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.579684 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.684560 4780 generic.go:334] "Generic (PLEG): container finished" podID="baead610-2c43-4e62-bb43-70afcade3d0b" containerID="0626ef430b1b230e5dc5f68b1d95f90d7488328b8997e6848ba98c563993a58e" exitCode=0 Feb 19 08:40:10 crc kubenswrapper[4780]: I0219 08:40:10.684607 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" event={"ID":"baead610-2c43-4e62-bb43-70afcade3d0b","Type":"ContainerDied","Data":"0626ef430b1b230e5dc5f68b1d95f90d7488328b8997e6848ba98c563993a58e"} Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.817818 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l7zgw" Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.976666 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-config-data\") pod \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.976742 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-scripts\") pod \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.976807 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-credential-keys\") pod \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.976881 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l66lj\" (UniqueName: \"kubernetes.io/projected/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-kube-api-access-l66lj\") pod \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.976914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-combined-ca-bundle\") pod \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.976998 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-fernet-keys\") pod \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\" (UID: \"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d\") " Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.990809 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-scripts" (OuterVolumeSpecName: "scripts") pod "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" (UID: "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.990943 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-kube-api-access-l66lj" (OuterVolumeSpecName: "kube-api-access-l66lj") pod "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" (UID: "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d"). InnerVolumeSpecName "kube-api-access-l66lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:11 crc kubenswrapper[4780]: I0219 08:40:11.996306 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" (UID: "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.000486 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" (UID: "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.013160 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-config-data" (OuterVolumeSpecName: "config-data") pod "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" (UID: "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.016367 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" (UID: "27fc0c55-a69b-4b4c-a6be-ac973c8bb23d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.080813 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.080867 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.080881 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.080902 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l66lj\" (UniqueName: \"kubernetes.io/projected/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-kube-api-access-l66lj\") on node \"crc\" DevicePath \"\"" Feb 19 
08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.080975 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.081009 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.700551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7zgw" event={"ID":"27fc0c55-a69b-4b4c-a6be-ac973c8bb23d","Type":"ContainerDied","Data":"463f7a315372d742540d3e23bc19cec621947de1c1a74823c6bc0ba15fab441d"} Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.700590 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463f7a315372d742540d3e23bc19cec621947de1c1a74823c6bc0ba15fab441d" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.700637 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l7zgw" Feb 19 08:40:12 crc kubenswrapper[4780]: I0219 08:40:12.996278 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l7zgw"] Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.003524 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l7zgw"] Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.106949 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tdzz8"] Feb 19 08:40:13 crc kubenswrapper[4780]: E0219 08:40:13.107318 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" containerName="keystone-bootstrap" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.107331 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" containerName="keystone-bootstrap" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.107542 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" containerName="keystone-bootstrap" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.108098 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.110521 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.110745 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.116068 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gt54x" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.116520 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.116736 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.119183 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tdzz8"] Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.201709 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-credential-keys\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.201775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-config-data\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.201815 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-combined-ca-bundle\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.201950 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-scripts\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.201988 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-fernet-keys\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.202025 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb9cz\" (UniqueName: \"kubernetes.io/projected/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-kube-api-access-cb9cz\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.303262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-fernet-keys\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.303326 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb9cz\" (UniqueName: 
\"kubernetes.io/projected/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-kube-api-access-cb9cz\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.303385 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-credential-keys\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.303418 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-config-data\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.303453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-combined-ca-bundle\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.303545 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-scripts\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.309185 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-credential-keys\") pod \"keystone-bootstrap-tdzz8\" 
(UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.309267 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-fernet-keys\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.309756 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-scripts\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.310082 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-config-data\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.311819 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-combined-ca-bundle\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.334212 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb9cz\" (UniqueName: \"kubernetes.io/projected/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-kube-api-access-cb9cz\") pod \"keystone-bootstrap-tdzz8\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 
08:40:13.428690 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:13 crc kubenswrapper[4780]: I0219 08:40:13.948649 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27fc0c55-a69b-4b4c-a6be-ac973c8bb23d" path="/var/lib/kubelet/pods/27fc0c55-a69b-4b4c-a6be-ac973c8bb23d/volumes" Feb 19 08:40:14 crc kubenswrapper[4780]: I0219 08:40:14.876369 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 19 08:40:19 crc kubenswrapper[4780]: I0219 08:40:19.876512 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 19 08:40:19 crc kubenswrapper[4780]: I0219 08:40:19.877171 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:40:24 crc kubenswrapper[4780]: E0219 08:40:24.902051 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:a5f8855b2ed00a661ac827cc3908e540ed2327354ac5a1d39491f4507237b4ec" Feb 19 08:40:24 crc kubenswrapper[4780]: E0219 08:40:24.903321 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:a5f8855b2ed00a661ac827cc3908e540ed2327354ac5a1d39491f4507237b4ec,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29p46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4s9pd_openstack(b0f0d73a-fdfa-471b-92d4-4433cff2bda8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:40:24 crc kubenswrapper[4780]: E0219 08:40:24.905167 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4s9pd" 
podUID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" Feb 19 08:40:25 crc kubenswrapper[4780]: E0219 08:40:25.837204 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:a5f8855b2ed00a661ac827cc3908e540ed2327354ac5a1d39491f4507237b4ec\\\"\"" pod="openstack/barbican-db-sync-4s9pd" podUID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" Feb 19 08:40:25 crc kubenswrapper[4780]: I0219 08:40:25.960195 4780 scope.go:117] "RemoveContainer" containerID="bb3a1a921bf7425424e3f0443cf9d10a9e6e1f5c4cbbe0e650c7ed2c0627fb38" Feb 19 08:40:25 crc kubenswrapper[4780]: E0219 08:40:25.998326 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" Feb 19 08:40:25 crc kubenswrapper[4780]: E0219 08:40:25.998520 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvj5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xdwdm_openstack(937a02e9-aead-48c0-9c00-28a327719c18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:40:26 crc kubenswrapper[4780]: E0219 08:40:25.999984 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xdwdm" podUID="937a02e9-aead-48c0-9c00-28a327719c18" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.132067 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.182779 4780 scope.go:117] "RemoveContainer" containerID="5fc888778ff62785a9d5ac72216467e3916ae67b86c8db0d1ff60070f995ed8a" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.244169 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-sb\") pod \"baead610-2c43-4e62-bb43-70afcade3d0b\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.244234 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wnmv\" (UniqueName: \"kubernetes.io/projected/baead610-2c43-4e62-bb43-70afcade3d0b-kube-api-access-5wnmv\") pod \"baead610-2c43-4e62-bb43-70afcade3d0b\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.244266 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-svc\") pod \"baead610-2c43-4e62-bb43-70afcade3d0b\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.244355 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-swift-storage-0\") pod \"baead610-2c43-4e62-bb43-70afcade3d0b\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.244408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-nb\") pod \"baead610-2c43-4e62-bb43-70afcade3d0b\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.244430 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-config\") pod \"baead610-2c43-4e62-bb43-70afcade3d0b\" (UID: \"baead610-2c43-4e62-bb43-70afcade3d0b\") " Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.251385 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baead610-2c43-4e62-bb43-70afcade3d0b-kube-api-access-5wnmv" (OuterVolumeSpecName: "kube-api-access-5wnmv") pod "baead610-2c43-4e62-bb43-70afcade3d0b" (UID: "baead610-2c43-4e62-bb43-70afcade3d0b"). InnerVolumeSpecName "kube-api-access-5wnmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.346445 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wnmv\" (UniqueName: \"kubernetes.io/projected/baead610-2c43-4e62-bb43-70afcade3d0b-kube-api-access-5wnmv\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.349153 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "baead610-2c43-4e62-bb43-70afcade3d0b" (UID: "baead610-2c43-4e62-bb43-70afcade3d0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.351376 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "baead610-2c43-4e62-bb43-70afcade3d0b" (UID: "baead610-2c43-4e62-bb43-70afcade3d0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.351469 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "baead610-2c43-4e62-bb43-70afcade3d0b" (UID: "baead610-2c43-4e62-bb43-70afcade3d0b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.354109 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "baead610-2c43-4e62-bb43-70afcade3d0b" (UID: "baead610-2c43-4e62-bb43-70afcade3d0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.368872 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-config" (OuterVolumeSpecName: "config") pod "baead610-2c43-4e62-bb43-70afcade3d0b" (UID: "baead610-2c43-4e62-bb43-70afcade3d0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.448341 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.448376 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.448388 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.448398 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:26 crc kubenswrapper[4780]: 
I0219 08:40:26.448409 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/baead610-2c43-4e62-bb43-70afcade3d0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.546055 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.640080 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tdzz8"] Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.837732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tdzz8" event={"ID":"2196ec7a-fea4-422a-8c7d-0350b6dd19c0","Type":"ContainerStarted","Data":"73d5cbd13217b63c486f4e7d0145f416329fcfa8071559edf5d00c330057800d"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.838005 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tdzz8" event={"ID":"2196ec7a-fea4-422a-8c7d-0350b6dd19c0","Type":"ContainerStarted","Data":"13a9250458e16bf79b5a7f91d21f6d98755e84c6ec91bb13f6b1f02a92f84665"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.840035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zjnch" event={"ID":"cbc03068-08f3-4ded-9521-145d162f2053","Type":"ContainerStarted","Data":"c3921f149371697efdafbcc4de7e9b64c08a711186d72940c711346e6a977776"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.843367 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerStarted","Data":"a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.846397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"faebb4e2dff7f5e3e2970ac268d8a29ca21fbe03139102c6901b8c69fd561a84"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.847595 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d","Type":"ContainerStarted","Data":"672795fc6c95ba9703cb06d2def7f277aaf6d67de1c4ed89e49b60fea548f47e"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.849413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465a8543-c177-44ed-a7ec-6a5d108d9537","Type":"ContainerStarted","Data":"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.849459 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-log" containerID="cri-o://877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8" gracePeriod=30 Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.849523 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-httpd" containerID="cri-o://e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae" gracePeriod=30 Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.859794 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tdzz8" podStartSLOduration=13.85977772 podStartE2EDuration="13.85977772s" podCreationTimestamp="2026-02-19 08:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:26.85696313 +0000 UTC m=+1169.600620579" 
watchObservedRunningTime="2026-02-19 08:40:26.85977772 +0000 UTC m=+1169.603435169" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.861190 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" event={"ID":"baead610-2c43-4e62-bb43-70afcade3d0b","Type":"ContainerDied","Data":"e40f07fd025aeab923f6237c17000080102b3d6b0d51d581020527a594509944"} Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.861230 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.861245 4780 scope.go:117] "RemoveContainer" containerID="0626ef430b1b230e5dc5f68b1d95f90d7488328b8997e6848ba98c563993a58e" Feb 19 08:40:26 crc kubenswrapper[4780]: E0219 08:40:26.863441 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b\\\"\"" pod="openstack/cinder-db-sync-xdwdm" podUID="937a02e9-aead-48c0-9c00-28a327719c18" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.896623 4780 scope.go:117] "RemoveContainer" containerID="0d199aa5c38e2d8cc1a3e238e3d85a2e2aff1858097f236718d37a03b20ef52c" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.897936 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-zjnch" podStartSLOduration=2.300160503 podStartE2EDuration="27.897908992s" podCreationTimestamp="2026-02-19 08:39:59 +0000 UTC" firstStartedPulling="2026-02-19 08:40:00.420328262 +0000 UTC m=+1143.163985701" lastFinishedPulling="2026-02-19 08:40:26.018076741 +0000 UTC m=+1168.761734190" observedRunningTime="2026-02-19 08:40:26.890608582 +0000 UTC m=+1169.634266051" watchObservedRunningTime="2026-02-19 08:40:26.897908992 +0000 UTC 
m=+1169.641566441" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.952085 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=28.95206361 podStartE2EDuration="28.95206361s" podCreationTimestamp="2026-02-19 08:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:26.922424008 +0000 UTC m=+1169.666081457" watchObservedRunningTime="2026-02-19 08:40:26.95206361 +0000 UTC m=+1169.695721059" Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.967976 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-cmlpn"] Feb 19 08:40:26 crc kubenswrapper[4780]: I0219 08:40:26.973841 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-cmlpn"] Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.747687 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873076 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-scripts\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873250 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-logs\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873283 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-config-data\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873308 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-combined-ca-bundle\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873378 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-httpd-run\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-internal-tls-certs\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873502 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhbq5\" (UniqueName: \"kubernetes.io/projected/465a8543-c177-44ed-a7ec-6a5d108d9537-kube-api-access-dhbq5\") pod \"465a8543-c177-44ed-a7ec-6a5d108d9537\" (UID: \"465a8543-c177-44ed-a7ec-6a5d108d9537\") " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.873535 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-logs" (OuterVolumeSpecName: "logs") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.874166 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.874569 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.876684 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.877689 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-scripts" (OuterVolumeSpecName: "scripts") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.878565 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465a8543-c177-44ed-a7ec-6a5d108d9537-kube-api-access-dhbq5" (OuterVolumeSpecName: "kube-api-access-dhbq5") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "kube-api-access-dhbq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.884550 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d","Type":"ContainerStarted","Data":"e4c6412a78837024fd2d94325b0f9362535152ef61396c7e5123de2e2c43a794"} Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.884594 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d","Type":"ContainerStarted","Data":"626adbcd162eb220882fdd6339ba5675a7ad356ad4f45ccb34ceb5c9adeeeec3"} Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.887566 4780 generic.go:334] "Generic (PLEG): container finished" podID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerID="e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae" exitCode=143 Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.887605 4780 generic.go:334] "Generic (PLEG): container finished" podID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerID="877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8" exitCode=143 Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.887855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465a8543-c177-44ed-a7ec-6a5d108d9537","Type":"ContainerDied","Data":"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae"} Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.887898 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465a8543-c177-44ed-a7ec-6a5d108d9537","Type":"ContainerDied","Data":"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8"} Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.887916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"465a8543-c177-44ed-a7ec-6a5d108d9537","Type":"ContainerDied","Data":"6c6c1aad1ca68813e7f1449f988a927a5f306e3156dd8211479df2e256d6cb99"} Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.887934 4780 scope.go:117] "RemoveContainer" containerID="e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.888033 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.922353 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.922332537 podStartE2EDuration="18.922332537s" podCreationTimestamp="2026-02-19 08:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:27.914566156 +0000 UTC m=+1170.658223645" watchObservedRunningTime="2026-02-19 08:40:27.922332537 +0000 UTC m=+1170.665989996" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.926247 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.931930 4780 scope.go:117] "RemoveContainer" containerID="877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.958055 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" path="/var/lib/kubelet/pods/baead610-2c43-4e62-bb43-70afcade3d0b/volumes" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.959664 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.960962 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-config-data" (OuterVolumeSpecName: "config-data") pod "465a8543-c177-44ed-a7ec-6a5d108d9537" (UID: "465a8543-c177-44ed-a7ec-6a5d108d9537"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.967767 4780 scope.go:117] "RemoveContainer" containerID="e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae" Feb 19 08:40:27 crc kubenswrapper[4780]: E0219 08:40:27.968306 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae\": container with ID starting with e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae not found: ID does not exist" containerID="e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.968345 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae"} err="failed to get container status \"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae\": rpc error: code = NotFound desc = could not find container \"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae\": container with ID starting with e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae not found: ID does not exist" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.968394 4780 scope.go:117] "RemoveContainer" containerID="877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8" Feb 19 08:40:27 crc kubenswrapper[4780]: E0219 08:40:27.969349 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8\": container with ID starting with 877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8 not found: ID does not exist" containerID="877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.969382 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8"} err="failed to get container status \"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8\": rpc error: code = NotFound desc = could not find container \"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8\": container with ID starting with 877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8 not found: ID does not exist" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.969406 4780 scope.go:117] "RemoveContainer" containerID="e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.970018 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae"} err="failed to get container status \"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae\": rpc error: code = NotFound desc = could not find container \"e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae\": container with ID starting with e7dd1f15b73f6914428976d6c6917b16e286c8ca1b30ec4cc4c31f3a492fefae not found: ID does not exist" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.970036 4780 scope.go:117] "RemoveContainer" containerID="877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.970309 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8"} err="failed to get container status \"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8\": rpc error: code = NotFound desc = could not find container \"877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8\": container with ID starting with 
877cd3e86d975387c2a1b649f3bfd4e35c03901df908a1ea1a78df02303769e8 not found: ID does not exist" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979108 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465a8543-c177-44ed-a7ec-6a5d108d9537-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979170 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979185 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhbq5\" (UniqueName: \"kubernetes.io/projected/465a8543-c177-44ed-a7ec-6a5d108d9537-kube-api-access-dhbq5\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979214 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979226 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979237 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:27 crc kubenswrapper[4780]: I0219 08:40:27.979249 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465a8543-c177-44ed-a7ec-6a5d108d9537-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 
08:40:28.022219 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.080600 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.226397 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.234833 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.259217 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:28 crc kubenswrapper[4780]: E0219 08:40:28.259826 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="init" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.259956 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="init" Feb 19 08:40:28 crc kubenswrapper[4780]: E0219 08:40:28.260032 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.260096 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" Feb 19 08:40:28 crc kubenswrapper[4780]: E0219 08:40:28.260199 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-httpd" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.260266 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-httpd" Feb 19 08:40:28 crc kubenswrapper[4780]: E0219 08:40:28.260343 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-log" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.260417 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-log" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.260705 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-log" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.260794 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.260870 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" containerName="glance-httpd" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.261923 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.264388 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.264760 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.271682 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385198 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385263 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385320 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsc6\" (UniqueName: \"kubernetes.io/projected/5e9345e6-0539-439d-a341-112ec8638694-kube-api-access-vtsc6\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385374 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385402 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-logs\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.385608 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.487847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488556 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488582 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsc6\" (UniqueName: \"kubernetes.io/projected/5e9345e6-0539-439d-a341-112ec8638694-kube-api-access-vtsc6\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488630 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488669 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488720 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-logs\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488793 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488841 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.488960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.489490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.489785 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.494088 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.494650 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.496799 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.509164 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.518342 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsc6\" (UniqueName: \"kubernetes.io/projected/5e9345e6-0539-439d-a341-112ec8638694-kube-api-access-vtsc6\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.536346 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") " pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.584725 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:28 crc kubenswrapper[4780]: I0219 08:40:28.898093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerStarted","Data":"510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3"} Feb 19 08:40:29 crc kubenswrapper[4780]: I0219 08:40:29.130778 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:40:29 crc kubenswrapper[4780]: I0219 08:40:29.878673 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-cmlpn" podUID="baead610-2c43-4e62-bb43-70afcade3d0b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Feb 19 08:40:29 crc kubenswrapper[4780]: I0219 08:40:29.909726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e9345e6-0539-439d-a341-112ec8638694","Type":"ContainerStarted","Data":"d880363aee9763fc6a800e80dfeed449e140b3a7295080df9834ff9a3bd002db"} Feb 19 08:40:29 crc kubenswrapper[4780]: I0219 08:40:29.909803 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e9345e6-0539-439d-a341-112ec8638694","Type":"ContainerStarted","Data":"31cf59dc8ddde18777a881c43db8495a39fdb1af21796decc53ae9c18bad8841"} Feb 
19 08:40:29 crc kubenswrapper[4780]: I0219 08:40:29.954699 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465a8543-c177-44ed-a7ec-6a5d108d9537" path="/var/lib/kubelet/pods/465a8543-c177-44ed-a7ec-6a5d108d9537/volumes" Feb 19 08:40:30 crc kubenswrapper[4780]: I0219 08:40:30.581103 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 08:40:30 crc kubenswrapper[4780]: I0219 08:40:30.581506 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 08:40:30 crc kubenswrapper[4780]: I0219 08:40:30.621375 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 08:40:30 crc kubenswrapper[4780]: I0219 08:40:30.662771 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 08:40:30 crc kubenswrapper[4780]: I0219 08:40:30.923334 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 08:40:30 crc kubenswrapper[4780]: I0219 08:40:30.923520 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 08:40:32 crc kubenswrapper[4780]: I0219 08:40:32.945332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e9345e6-0539-439d-a341-112ec8638694","Type":"ContainerStarted","Data":"b0d24751d84a3171fbef0fecf9836682ec7b58e2857bc3f42252296301bb363d"} Feb 19 08:40:32 crc kubenswrapper[4780]: I0219 08:40:32.979739 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.979716573 podStartE2EDuration="4.979716573s" podCreationTimestamp="2026-02-19 08:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:32.96867182 +0000 UTC m=+1175.712329299" watchObservedRunningTime="2026-02-19 08:40:32.979716573 +0000 UTC m=+1175.723374022" Feb 19 08:40:33 crc kubenswrapper[4780]: I0219 08:40:33.961234 4780 generic.go:334] "Generic (PLEG): container finished" podID="2196ec7a-fea4-422a-8c7d-0350b6dd19c0" containerID="73d5cbd13217b63c486f4e7d0145f416329fcfa8071559edf5d00c330057800d" exitCode=0 Feb 19 08:40:33 crc kubenswrapper[4780]: I0219 08:40:33.961340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tdzz8" event={"ID":"2196ec7a-fea4-422a-8c7d-0350b6dd19c0","Type":"ContainerDied","Data":"73d5cbd13217b63c486f4e7d0145f416329fcfa8071559edf5d00c330057800d"} Feb 19 08:40:34 crc kubenswrapper[4780]: E0219 08:40:34.930707 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbc03068_08f3_4ded_9521_145d162f2053.slice/crio-c3921f149371697efdafbcc4de7e9b64c08a711186d72940c711346e6a977776.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:40:34 crc kubenswrapper[4780]: I0219 08:40:34.990139 4780 generic.go:334] "Generic (PLEG): container finished" podID="cbc03068-08f3-4ded-9521-145d162f2053" containerID="c3921f149371697efdafbcc4de7e9b64c08a711186d72940c711346e6a977776" exitCode=0 Feb 19 08:40:34 crc kubenswrapper[4780]: I0219 08:40:34.990199 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zjnch" event={"ID":"cbc03068-08f3-4ded-9521-145d162f2053","Type":"ContainerDied","Data":"c3921f149371697efdafbcc4de7e9b64c08a711186d72940c711346e6a977776"} Feb 19 08:40:34 crc kubenswrapper[4780]: I0219 08:40:34.993115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerStarted","Data":"f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6"} Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.311484 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.420647 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-config-data\") pod \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.420745 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-fernet-keys\") pod \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.420789 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-credential-keys\") pod \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.420821 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb9cz\" (UniqueName: \"kubernetes.io/projected/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-kube-api-access-cb9cz\") pod \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.420989 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-scripts\") pod 
\"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.421023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-combined-ca-bundle\") pod \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\" (UID: \"2196ec7a-fea4-422a-8c7d-0350b6dd19c0\") " Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.427640 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-kube-api-access-cb9cz" (OuterVolumeSpecName: "kube-api-access-cb9cz") pod "2196ec7a-fea4-422a-8c7d-0350b6dd19c0" (UID: "2196ec7a-fea4-422a-8c7d-0350b6dd19c0"). InnerVolumeSpecName "kube-api-access-cb9cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.428151 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2196ec7a-fea4-422a-8c7d-0350b6dd19c0" (UID: "2196ec7a-fea4-422a-8c7d-0350b6dd19c0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.429561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2196ec7a-fea4-422a-8c7d-0350b6dd19c0" (UID: "2196ec7a-fea4-422a-8c7d-0350b6dd19c0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.430801 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-scripts" (OuterVolumeSpecName: "scripts") pod "2196ec7a-fea4-422a-8c7d-0350b6dd19c0" (UID: "2196ec7a-fea4-422a-8c7d-0350b6dd19c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.451499 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-config-data" (OuterVolumeSpecName: "config-data") pod "2196ec7a-fea4-422a-8c7d-0350b6dd19c0" (UID: "2196ec7a-fea4-422a-8c7d-0350b6dd19c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.458335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2196ec7a-fea4-422a-8c7d-0350b6dd19c0" (UID: "2196ec7a-fea4-422a-8c7d-0350b6dd19c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.522992 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.523030 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.523040 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.523055 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb9cz\" (UniqueName: \"kubernetes.io/projected/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-kube-api-access-cb9cz\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.523066 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:35 crc kubenswrapper[4780]: I0219 08:40:35.523076 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2196ec7a-fea4-422a-8c7d-0350b6dd19c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.009375 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tdzz8" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.009363 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tdzz8" event={"ID":"2196ec7a-fea4-422a-8c7d-0350b6dd19c0","Type":"ContainerDied","Data":"13a9250458e16bf79b5a7f91d21f6d98755e84c6ec91bb13f6b1f02a92f84665"} Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.009420 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13a9250458e16bf79b5a7f91d21f6d98755e84c6ec91bb13f6b1f02a92f84665" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.073177 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-68c564b849-pqj6g"] Feb 19 08:40:36 crc kubenswrapper[4780]: E0219 08:40:36.073724 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2196ec7a-fea4-422a-8c7d-0350b6dd19c0" containerName="keystone-bootstrap" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.073740 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2196ec7a-fea4-422a-8c7d-0350b6dd19c0" containerName="keystone-bootstrap" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.073900 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2196ec7a-fea4-422a-8c7d-0350b6dd19c0" containerName="keystone-bootstrap" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.074481 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.079218 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.079330 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.079591 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gt54x" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.079739 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.079486 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.079774 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.099938 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68c564b849-pqj6g"] Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133400 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-fernet-keys\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133458 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-public-tls-certs\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " 
pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133506 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-credential-keys\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133529 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-scripts\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-combined-ca-bundle\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133632 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snm4\" (UniqueName: \"kubernetes.io/projected/e3467470-e6f9-49c1-b49f-8cea159e5af9-kube-api-access-7snm4\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133676 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-config-data\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") 
" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.133700 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-internal-tls-certs\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.235982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-public-tls-certs\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.236050 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-credential-keys\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.236068 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-scripts\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.236110 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-combined-ca-bundle\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc 
kubenswrapper[4780]: I0219 08:40:36.236154 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snm4\" (UniqueName: \"kubernetes.io/projected/e3467470-e6f9-49c1-b49f-8cea159e5af9-kube-api-access-7snm4\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.236185 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-config-data\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.236206 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-internal-tls-certs\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.236253 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-fernet-keys\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.256784 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-combined-ca-bundle\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.263502 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-scripts\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.263930 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-fernet-keys\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.265589 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-credential-keys\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.274923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snm4\" (UniqueName: \"kubernetes.io/projected/e3467470-e6f9-49c1-b49f-8cea159e5af9-kube-api-access-7snm4\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.275089 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-config-data\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.275350 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-public-tls-certs\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.275734 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-internal-tls-certs\") pod \"keystone-68c564b849-pqj6g\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.399804 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.507661 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zjnch" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.542726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbp2f\" (UniqueName: \"kubernetes.io/projected/cbc03068-08f3-4ded-9521-145d162f2053-kube-api-access-lbp2f\") pod \"cbc03068-08f3-4ded-9521-145d162f2053\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.542868 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-config-data\") pod \"cbc03068-08f3-4ded-9521-145d162f2053\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.542905 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-combined-ca-bundle\") pod \"cbc03068-08f3-4ded-9521-145d162f2053\" (UID: 
\"cbc03068-08f3-4ded-9521-145d162f2053\") " Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.542944 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-scripts\") pod \"cbc03068-08f3-4ded-9521-145d162f2053\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.542988 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc03068-08f3-4ded-9521-145d162f2053-logs\") pod \"cbc03068-08f3-4ded-9521-145d162f2053\" (UID: \"cbc03068-08f3-4ded-9521-145d162f2053\") " Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.543720 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc03068-08f3-4ded-9521-145d162f2053-logs" (OuterVolumeSpecName: "logs") pod "cbc03068-08f3-4ded-9521-145d162f2053" (UID: "cbc03068-08f3-4ded-9521-145d162f2053"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.549765 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc03068-08f3-4ded-9521-145d162f2053-kube-api-access-lbp2f" (OuterVolumeSpecName: "kube-api-access-lbp2f") pod "cbc03068-08f3-4ded-9521-145d162f2053" (UID: "cbc03068-08f3-4ded-9521-145d162f2053"). InnerVolumeSpecName "kube-api-access-lbp2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.550182 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-scripts" (OuterVolumeSpecName: "scripts") pod "cbc03068-08f3-4ded-9521-145d162f2053" (UID: "cbc03068-08f3-4ded-9521-145d162f2053"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.568038 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-config-data" (OuterVolumeSpecName: "config-data") pod "cbc03068-08f3-4ded-9521-145d162f2053" (UID: "cbc03068-08f3-4ded-9521-145d162f2053"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.573952 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbc03068-08f3-4ded-9521-145d162f2053" (UID: "cbc03068-08f3-4ded-9521-145d162f2053"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.644718 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbp2f\" (UniqueName: \"kubernetes.io/projected/cbc03068-08f3-4ded-9521-145d162f2053-kube-api-access-lbp2f\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.644745 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.644754 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.644762 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc03068-08f3-4ded-9521-145d162f2053-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.644770 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc03068-08f3-4ded-9521-145d162f2053-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:36 crc kubenswrapper[4780]: I0219 08:40:36.922833 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68c564b849-pqj6g"] Feb 19 08:40:36 crc kubenswrapper[4780]: W0219 08:40:36.930423 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3467470_e6f9_49c1_b49f_8cea159e5af9.slice/crio-d7eae6eb2d4fecffd3005969e0bdbe1acb23278d700df6ee9402a3c32e8ea861 WatchSource:0}: Error finding container d7eae6eb2d4fecffd3005969e0bdbe1acb23278d700df6ee9402a3c32e8ea861: Status 404 returned error can't find the container with id d7eae6eb2d4fecffd3005969e0bdbe1acb23278d700df6ee9402a3c32e8ea861 Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.026268 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zjnch" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.026263 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zjnch" event={"ID":"cbc03068-08f3-4ded-9521-145d162f2053","Type":"ContainerDied","Data":"427853e2d4b624fa2154afd47859b01e1722a416fd0fd84bb3611fe678f3b907"} Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.026495 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="427853e2d4b624fa2154afd47859b01e1722a416fd0fd84bb3611fe678f3b907" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.028231 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68c564b849-pqj6g" event={"ID":"e3467470-e6f9-49c1-b49f-8cea159e5af9","Type":"ContainerStarted","Data":"d7eae6eb2d4fecffd3005969e0bdbe1acb23278d700df6ee9402a3c32e8ea861"} Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.222299 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78d56d997b-gx5gk"] Feb 19 08:40:37 crc kubenswrapper[4780]: E0219 08:40:37.222866 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc03068-08f3-4ded-9521-145d162f2053" containerName="placement-db-sync" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.222882 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc03068-08f3-4ded-9521-145d162f2053" containerName="placement-db-sync" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.223056 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc03068-08f3-4ded-9521-145d162f2053" containerName="placement-db-sync" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.226749 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.232818 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.232819 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.233180 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.233349 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.233626 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7h4vj" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.238604 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78d56d997b-gx5gk"] Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260463 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-scripts\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260542 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-public-tls-certs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260575 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzsl\" (UniqueName: \"kubernetes.io/projected/80168270-a6db-4ef2-833b-5d2eb2781779-kube-api-access-hrzsl\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80168270-a6db-4ef2-833b-5d2eb2781779-logs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260645 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-internal-tls-certs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260668 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-combined-ca-bundle\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.260749 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-config-data\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362199 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-config-data\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-scripts\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362324 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-public-tls-certs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362349 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzsl\" (UniqueName: \"kubernetes.io/projected/80168270-a6db-4ef2-833b-5d2eb2781779-kube-api-access-hrzsl\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80168270-a6db-4ef2-833b-5d2eb2781779-logs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-internal-tls-certs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.362435 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-combined-ca-bundle\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.363051 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80168270-a6db-4ef2-833b-5d2eb2781779-logs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.370739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-combined-ca-bundle\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.375284 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-public-tls-certs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.377214 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-config-data\") pod 
\"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.378469 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-internal-tls-certs\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.379457 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-scripts\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.387634 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzsl\" (UniqueName: \"kubernetes.io/projected/80168270-a6db-4ef2-833b-5d2eb2781779-kube-api-access-hrzsl\") pod \"placement-78d56d997b-gx5gk\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:37 crc kubenswrapper[4780]: I0219 08:40:37.601658 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.036505 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68c564b849-pqj6g" event={"ID":"e3467470-e6f9-49c1-b49f-8cea159e5af9","Type":"ContainerStarted","Data":"bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918"} Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.037264 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.058877 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-68c564b849-pqj6g" podStartSLOduration=2.058857276 podStartE2EDuration="2.058857276s" podCreationTimestamp="2026-02-19 08:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:38.054098338 +0000 UTC m=+1180.797755797" watchObservedRunningTime="2026-02-19 08:40:38.058857276 +0000 UTC m=+1180.802514735" Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.585187 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.585222 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.614017 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.627434 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:38 crc kubenswrapper[4780]: W0219 08:40:38.703382 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80168270_a6db_4ef2_833b_5d2eb2781779.slice/crio-f5ef85bffb1dd240eec1f579c27150411a5232b23cf1f578d27c2de676195cbf WatchSource:0}: Error finding container f5ef85bffb1dd240eec1f579c27150411a5232b23cf1f578d27c2de676195cbf: Status 404 returned error can't find the container with id f5ef85bffb1dd240eec1f579c27150411a5232b23cf1f578d27c2de676195cbf Feb 19 08:40:38 crc kubenswrapper[4780]: I0219 08:40:38.708153 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78d56d997b-gx5gk"] Feb 19 08:40:39 crc kubenswrapper[4780]: I0219 08:40:39.047744 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d56d997b-gx5gk" event={"ID":"80168270-a6db-4ef2-833b-5d2eb2781779","Type":"ContainerStarted","Data":"4a2deac91a19b42884cc5219eea805083e84f812e11c4c1c5d92ca54a8646ebf"} Feb 19 08:40:39 crc kubenswrapper[4780]: I0219 08:40:39.048228 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d56d997b-gx5gk" event={"ID":"80168270-a6db-4ef2-833b-5d2eb2781779","Type":"ContainerStarted","Data":"f5ef85bffb1dd240eec1f579c27150411a5232b23cf1f578d27c2de676195cbf"} Feb 19 08:40:39 crc kubenswrapper[4780]: I0219 08:40:39.048259 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:39 crc kubenswrapper[4780]: I0219 08:40:39.048277 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:41 crc kubenswrapper[4780]: I0219 08:40:41.123000 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:41 crc kubenswrapper[4780]: I0219 08:40:41.123622 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:40:41 crc kubenswrapper[4780]: I0219 08:40:41.350451 4780 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 08:40:42 crc kubenswrapper[4780]: I0219 08:40:42.628961 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 08:40:42 crc kubenswrapper[4780]: I0219 08:40:42.846158 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.139764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerStarted","Data":"21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba"} Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.141073 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-central-agent" containerID="cri-o://a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6" gracePeriod=30 Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.141450 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.141783 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="proxy-httpd" containerID="cri-o://21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba" gracePeriod=30 Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.142030 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="sg-core" containerID="cri-o://f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6" gracePeriod=30 Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 
08:40:46.142069 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-notification-agent" containerID="cri-o://510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3" gracePeriod=30 Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.151548 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4s9pd" event={"ID":"b0f0d73a-fdfa-471b-92d4-4433cff2bda8","Type":"ContainerStarted","Data":"dbbf16158178fa041adbf5533a0fdbe89cd97e4da8fe0830082cd1e4d2ba6f56"} Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.154467 4780 generic.go:334] "Generic (PLEG): container finished" podID="0b200855-daef-430e-8967-62c2e51acc86" containerID="b507eaf6aa48bfdaa9db8ae94ab3426bff9bdb3667fb4f653a3340a6b2340be5" exitCode=0 Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.154585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf6dk" event={"ID":"0b200855-daef-430e-8967-62c2e51acc86","Type":"ContainerDied","Data":"b507eaf6aa48bfdaa9db8ae94ab3426bff9bdb3667fb4f653a3340a6b2340be5"} Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.156664 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xdwdm" event={"ID":"937a02e9-aead-48c0-9c00-28a327719c18","Type":"ContainerStarted","Data":"c1e464177020365ce834cf4619a79c16908295623f830da258cffc9a74c3a004"} Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.162546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d56d997b-gx5gk" event={"ID":"80168270-a6db-4ef2-833b-5d2eb2781779","Type":"ContainerStarted","Data":"c021cdf9bb1cdb9fed0336ae04d70630c86309a60b2593d61c63d06c8b046dcd"} Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.170326 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:46 crc 
kubenswrapper[4780]: I0219 08:40:46.170377 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.174452 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.345365232 podStartE2EDuration="48.174434104s" podCreationTimestamp="2026-02-19 08:39:58 +0000 UTC" firstStartedPulling="2026-02-19 08:40:00.25351171 +0000 UTC m=+1142.997169159" lastFinishedPulling="2026-02-19 08:40:45.082580582 +0000 UTC m=+1187.826238031" observedRunningTime="2026-02-19 08:40:46.17106541 +0000 UTC m=+1188.914722920" watchObservedRunningTime="2026-02-19 08:40:46.174434104 +0000 UTC m=+1188.918091553" Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.203194 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78d56d997b-gx5gk" podStartSLOduration=9.203171514 podStartE2EDuration="9.203171514s" podCreationTimestamp="2026-02-19 08:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:46.189892916 +0000 UTC m=+1188.933550405" watchObservedRunningTime="2026-02-19 08:40:46.203171514 +0000 UTC m=+1188.946828983" Feb 19 08:40:46 crc kubenswrapper[4780]: I0219 08:40:46.218311 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4s9pd" podStartSLOduration=2.5633695579999998 podStartE2EDuration="47.218288937s" podCreationTimestamp="2026-02-19 08:39:59 +0000 UTC" firstStartedPulling="2026-02-19 08:40:00.42752273 +0000 UTC m=+1143.171180179" lastFinishedPulling="2026-02-19 08:40:45.082442109 +0000 UTC m=+1187.826099558" observedRunningTime="2026-02-19 08:40:46.214849612 +0000 UTC m=+1188.958507061" watchObservedRunningTime="2026-02-19 08:40:46.218288937 +0000 UTC m=+1188.961946386" Feb 19 08:40:46 crc kubenswrapper[4780]: 
I0219 08:40:46.245438 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xdwdm" podStartSLOduration=3.4407759 podStartE2EDuration="48.245416628s" podCreationTimestamp="2026-02-19 08:39:58 +0000 UTC" firstStartedPulling="2026-02-19 08:40:00.278720223 +0000 UTC m=+1143.022377672" lastFinishedPulling="2026-02-19 08:40:45.083360931 +0000 UTC m=+1187.827018400" observedRunningTime="2026-02-19 08:40:46.238004265 +0000 UTC m=+1188.981661724" watchObservedRunningTime="2026-02-19 08:40:46.245416628 +0000 UTC m=+1188.989074077" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.173638 4780 generic.go:334] "Generic (PLEG): container finished" podID="358e3824-b47f-4b03-939b-de8e27734e40" containerID="21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba" exitCode=0 Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.173952 4780 generic.go:334] "Generic (PLEG): container finished" podID="358e3824-b47f-4b03-939b-de8e27734e40" containerID="f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6" exitCode=2 Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.173965 4780 generic.go:334] "Generic (PLEG): container finished" podID="358e3824-b47f-4b03-939b-de8e27734e40" containerID="a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6" exitCode=0 Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.173740 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerDied","Data":"21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba"} Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.174035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerDied","Data":"f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6"} Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.174058 
4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerDied","Data":"a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6"} Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.480064 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.480975 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.571675 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-combined-ca-bundle\") pod \"0b200855-daef-430e-8967-62c2e51acc86\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.571923 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-config\") pod \"0b200855-daef-430e-8967-62c2e51acc86\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.572134 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjzp\" (UniqueName: \"kubernetes.io/projected/0b200855-daef-430e-8967-62c2e51acc86-kube-api-access-ngjzp\") pod \"0b200855-daef-430e-8967-62c2e51acc86\" (UID: \"0b200855-daef-430e-8967-62c2e51acc86\") " Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.578285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b200855-daef-430e-8967-62c2e51acc86-kube-api-access-ngjzp" (OuterVolumeSpecName: "kube-api-access-ngjzp") pod "0b200855-daef-430e-8967-62c2e51acc86" (UID: 
"0b200855-daef-430e-8967-62c2e51acc86"). InnerVolumeSpecName "kube-api-access-ngjzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.602691 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-config" (OuterVolumeSpecName: "config") pod "0b200855-daef-430e-8967-62c2e51acc86" (UID: "0b200855-daef-430e-8967-62c2e51acc86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.603888 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b200855-daef-430e-8967-62c2e51acc86" (UID: "0b200855-daef-430e-8967-62c2e51acc86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.673907 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjzp\" (UniqueName: \"kubernetes.io/projected/0b200855-daef-430e-8967-62c2e51acc86-kube-api-access-ngjzp\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.673947 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:47 crc kubenswrapper[4780]: I0219 08:40:47.673956 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b200855-daef-430e-8967-62c2e51acc86-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.184835 4780 generic.go:334] "Generic (PLEG): container finished" podID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" 
containerID="dbbf16158178fa041adbf5533a0fdbe89cd97e4da8fe0830082cd1e4d2ba6f56" exitCode=0 Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.184887 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4s9pd" event={"ID":"b0f0d73a-fdfa-471b-92d4-4433cff2bda8","Type":"ContainerDied","Data":"dbbf16158178fa041adbf5533a0fdbe89cd97e4da8fe0830082cd1e4d2ba6f56"} Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.187815 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vf6dk" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.188304 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf6dk" event={"ID":"0b200855-daef-430e-8967-62c2e51acc86","Type":"ContainerDied","Data":"783a73d0a5bfa930b216d2a20b0cab431168d7eaaaf390cc2013df9804ec17d3"} Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.188330 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783a73d0a5bfa930b216d2a20b0cab431168d7eaaaf390cc2013df9804ec17d3" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.473140 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-cjh9f"] Feb 19 08:40:48 crc kubenswrapper[4780]: E0219 08:40:48.473586 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b200855-daef-430e-8967-62c2e51acc86" containerName="neutron-db-sync" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.473611 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b200855-daef-430e-8967-62c2e51acc86" containerName="neutron-db-sync" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.473819 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b200855-daef-430e-8967-62c2e51acc86" containerName="neutron-db-sync" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.474939 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.481905 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-cjh9f"] Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.601353 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-swift-storage-0\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.601386 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-sb\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.601458 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-config\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.601538 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4lh\" (UniqueName: \"kubernetes.io/projected/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-kube-api-access-8c4lh\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.601676 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-nb\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.601711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-svc\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.694997 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55d4f7d9cb-jgtm7"] Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.696545 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.700457 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ldrkc" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.700883 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.700908 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.701156 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.703228 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-swift-storage-0\") pod 
\"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.703258 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-sb\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.703275 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-config\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.703298 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4lh\" (UniqueName: \"kubernetes.io/projected/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-kube-api-access-8c4lh\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.703337 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-nb\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.703353 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-svc\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") 
" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.704640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-svc\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.704647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-nb\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.704652 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-sb\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.705051 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-swift-storage-0\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.704649 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-config\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.710494 4780 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55d4f7d9cb-jgtm7"] Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.729658 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4lh\" (UniqueName: \"kubernetes.io/projected/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-kube-api-access-8c4lh\") pod \"dnsmasq-dns-db5c97f8f-cjh9f\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.793228 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.805527 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-ovndb-tls-certs\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.805824 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-combined-ca-bundle\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.805976 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-config\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.806154 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-llx9g\" (UniqueName: \"kubernetes.io/projected/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-kube-api-access-llx9g\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.806267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-httpd-config\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.908356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-ovndb-tls-certs\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.908858 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-combined-ca-bundle\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.908899 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-config\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.908977 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llx9g\" (UniqueName: 
\"kubernetes.io/projected/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-kube-api-access-llx9g\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.909001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-httpd-config\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.912996 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-httpd-config\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.915022 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-ovndb-tls-certs\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.915231 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-combined-ca-bundle\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.921280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-config\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: 
\"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:48 crc kubenswrapper[4780]: I0219 08:40:48.926259 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llx9g\" (UniqueName: \"kubernetes.io/projected/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-kube-api-access-llx9g\") pod \"neutron-55d4f7d9cb-jgtm7\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.013362 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.248775 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-cjh9f"] Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.570264 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:40:49 crc kubenswrapper[4780]: W0219 08:40:49.657010 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod510700ac_52ab_4ff8_b2c5_61ce6b2acf0a.slice/crio-b76218eab9e7cf05d613307002a75ed8e83d06ae7bcca9d2a724eee1bba8bee6 WatchSource:0}: Error finding container b76218eab9e7cf05d613307002a75ed8e83d06ae7bcca9d2a724eee1bba8bee6: Status 404 returned error can't find the container with id b76218eab9e7cf05d613307002a75ed8e83d06ae7bcca9d2a724eee1bba8bee6 Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.660433 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55d4f7d9cb-jgtm7"] Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.721927 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-combined-ca-bundle\") pod 
\"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.721992 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29p46\" (UniqueName: \"kubernetes.io/projected/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-kube-api-access-29p46\") pod \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.722118 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-db-sync-config-data\") pod \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\" (UID: \"b0f0d73a-fdfa-471b-92d4-4433cff2bda8\") " Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.725943 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-kube-api-access-29p46" (OuterVolumeSpecName: "kube-api-access-29p46") pod "b0f0d73a-fdfa-471b-92d4-4433cff2bda8" (UID: "b0f0d73a-fdfa-471b-92d4-4433cff2bda8"). InnerVolumeSpecName "kube-api-access-29p46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.726200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b0f0d73a-fdfa-471b-92d4-4433cff2bda8" (UID: "b0f0d73a-fdfa-471b-92d4-4433cff2bda8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.743659 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0f0d73a-fdfa-471b-92d4-4433cff2bda8" (UID: "b0f0d73a-fdfa-471b-92d4-4433cff2bda8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.824168 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.824381 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:49 crc kubenswrapper[4780]: I0219 08:40:49.824439 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29p46\" (UniqueName: \"kubernetes.io/projected/b0f0d73a-fdfa-471b-92d4-4433cff2bda8-kube-api-access-29p46\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.220746 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerID="65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1" exitCode=0 Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.221252 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" event={"ID":"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86","Type":"ContainerDied","Data":"65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1"} Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.221291 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" event={"ID":"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86","Type":"ContainerStarted","Data":"5e160f07fe6d511c5c0cd4fb4b4e2045894cbd407ac034cd5a1674b19ccadfb3"} Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.238349 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4s9pd" event={"ID":"b0f0d73a-fdfa-471b-92d4-4433cff2bda8","Type":"ContainerDied","Data":"00296a180d89963f16236610cbb233f2b5257b3f5d4c6ff50f8a28c521d156bc"} Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.238397 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00296a180d89963f16236610cbb233f2b5257b3f5d4c6ff50f8a28c521d156bc" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.238400 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4s9pd" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.245257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d4f7d9cb-jgtm7" event={"ID":"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a","Type":"ContainerStarted","Data":"ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973"} Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.245326 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d4f7d9cb-jgtm7" event={"ID":"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a","Type":"ContainerStarted","Data":"4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731"} Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.245343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d4f7d9cb-jgtm7" event={"ID":"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a","Type":"ContainerStarted","Data":"b76218eab9e7cf05d613307002a75ed8e83d06ae7bcca9d2a724eee1bba8bee6"} Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.245920 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.284334 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55d4f7d9cb-jgtm7" podStartSLOduration=2.284313565 podStartE2EDuration="2.284313565s" podCreationTimestamp="2026-02-19 08:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:50.277497587 +0000 UTC m=+1193.021155036" watchObservedRunningTime="2026-02-19 08:40:50.284313565 +0000 UTC m=+1193.027971014" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.741969 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84f494b65f-swr5f"] Feb 19 08:40:50 crc kubenswrapper[4780]: E0219 08:40:50.742608 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" containerName="barbican-db-sync" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.742622 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" containerName="barbican-db-sync" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.742790 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" containerName="barbican-db-sync" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.743625 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.746013 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.746240 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.750294 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6nkds" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.765605 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84f494b65f-swr5f"] Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.824218 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57d747cdfb-5j92k"] Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.826891 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.837134 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.843169 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57d747cdfb-5j92k"] Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.847560 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data-custom\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.847625 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.847660 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-logs\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.847699 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-combined-ca-bundle\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: 
\"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.847787 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wdf\" (UniqueName: \"kubernetes.io/projected/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-kube-api-access-t5wdf\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.895325 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-cjh9f"] Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.942272 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-7gskl"] Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.944044 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-combined-ca-bundle\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949091 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f650c235-dc2c-4737-9624-e2ea4d9ed761-logs\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949223 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf72j\" (UniqueName: \"kubernetes.io/projected/f650c235-dc2c-4737-9624-e2ea4d9ed761-kube-api-access-zf72j\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949295 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data-custom\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949334 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949366 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-logs\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949399 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:50 crc kubenswrapper[4780]: 
I0219 08:40:50.949431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-combined-ca-bundle\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949516 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wdf\" (UniqueName: \"kubernetes.io/projected/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-kube-api-access-t5wdf\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data-custom\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.949954 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-logs\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.956916 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data-custom\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc 
kubenswrapper[4780]: I0219 08:40:50.958038 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-combined-ca-bundle\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.958382 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-7gskl"] Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.959421 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:50 crc kubenswrapper[4780]: I0219 08:40:50.972748 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wdf\" (UniqueName: \"kubernetes.io/projected/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-kube-api-access-t5wdf\") pod \"barbican-worker-84f494b65f-swr5f\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.002702 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dd6cc5868-cncbp"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.013645 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.016544 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.025758 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dd6cc5868-cncbp"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051022 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-combined-ca-bundle\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-config\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f650c235-dc2c-4737-9624-e2ea4d9ed761-logs\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051169 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf72j\" (UniqueName: \"kubernetes.io/projected/f650c235-dc2c-4737-9624-e2ea4d9ed761-kube-api-access-zf72j\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " 
pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051192 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-swift-storage-0\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-sb\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgcx\" (UniqueName: \"kubernetes.io/projected/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-kube-api-access-4lgcx\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051326 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-nb\") pod 
\"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051378 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-svc\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.051398 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data-custom\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.054652 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f650c235-dc2c-4737-9624-e2ea4d9ed761-logs\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.056229 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-combined-ca-bundle\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.057439 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data-custom\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.064764 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.077540 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf72j\" (UniqueName: \"kubernetes.io/projected/f650c235-dc2c-4737-9624-e2ea4d9ed761-kube-api-access-zf72j\") pod \"barbican-keystone-listener-57d747cdfb-5j92k\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.106795 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.153683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-swift-storage-0\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.154049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15014e61-c296-45be-b4f2-a7577a276925-logs\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.154092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-sb\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.155236 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-sb\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.155304 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-swift-storage-0\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " 
pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.155482 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.156395 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgcx\" (UniqueName: \"kubernetes.io/projected/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-kube-api-access-4lgcx\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.156803 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.156838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dj8\" (UniqueName: \"kubernetes.io/projected/15014e61-c296-45be-b4f2-a7577a276925-kube-api-access-k8dj8\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.156875 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-combined-ca-bundle\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.156902 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data-custom\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.156961 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-nb\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.157014 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-svc\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.157069 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-config\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.157975 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-config\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.158934 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-svc\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.159818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-nb\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.177505 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgcx\" (UniqueName: \"kubernetes.io/projected/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-kube-api-access-4lgcx\") pod \"dnsmasq-dns-9d49dd75f-7gskl\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.231490 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.258189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15014e61-c296-45be-b4f2-a7577a276925-logs\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.258250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.258273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dj8\" (UniqueName: \"kubernetes.io/projected/15014e61-c296-45be-b4f2-a7577a276925-kube-api-access-k8dj8\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.258299 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-combined-ca-bundle\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.258316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data-custom\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " 
pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.259447 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15014e61-c296-45be-b4f2-a7577a276925-logs\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.265104 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.265351 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data-custom\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.266915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-combined-ca-bundle\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.269983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" event={"ID":"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86","Type":"ContainerStarted","Data":"181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b"} Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.270978 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.273024 4780 generic.go:334] "Generic (PLEG): container finished" podID="358e3824-b47f-4b03-939b-de8e27734e40" containerID="510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3" exitCode=0 Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.273066 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerDied","Data":"510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3"} Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.273084 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"358e3824-b47f-4b03-939b-de8e27734e40","Type":"ContainerDied","Data":"eff2072c157ef0a40149bcadd00f6bbfafc45f8606b1433452515e561d5c927d"} Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.273098 4780 scope.go:117] "RemoveContainer" containerID="21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.273210 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.293904 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dj8\" (UniqueName: \"kubernetes.io/projected/15014e61-c296-45be-b4f2-a7577a276925-kube-api-access-k8dj8\") pod \"barbican-api-7dd6cc5868-cncbp\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.300873 4780 generic.go:334] "Generic (PLEG): container finished" podID="937a02e9-aead-48c0-9c00-28a327719c18" containerID="c1e464177020365ce834cf4619a79c16908295623f830da258cffc9a74c3a004" exitCode=0 Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.301296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xdwdm" event={"ID":"937a02e9-aead-48c0-9c00-28a327719c18","Type":"ContainerDied","Data":"c1e464177020365ce834cf4619a79c16908295623f830da258cffc9a74c3a004"} Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.311960 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" podStartSLOduration=3.311940329 podStartE2EDuration="3.311940329s" podCreationTimestamp="2026-02-19 08:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:51.303320306 +0000 UTC m=+1194.046977765" watchObservedRunningTime="2026-02-19 08:40:51.311940329 +0000 UTC m=+1194.055597778" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.353182 4780 scope.go:117] "RemoveContainer" containerID="f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359578 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-config-data\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359621 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-scripts\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-log-httpd\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-combined-ca-bundle\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-sg-core-conf-yaml\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359862 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-run-httpd\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.359913 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnsnn\" (UniqueName: \"kubernetes.io/projected/358e3824-b47f-4b03-939b-de8e27734e40-kube-api-access-pnsnn\") pod \"358e3824-b47f-4b03-939b-de8e27734e40\" (UID: \"358e3824-b47f-4b03-939b-de8e27734e40\") " Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.361930 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.361957 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.366082 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358e3824-b47f-4b03-939b-de8e27734e40-kube-api-access-pnsnn" (OuterVolumeSpecName: "kube-api-access-pnsnn") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "kube-api-access-pnsnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.366510 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-scripts" (OuterVolumeSpecName: "scripts") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.377360 4780 scope.go:117] "RemoveContainer" containerID="510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.413161 4780 scope.go:117] "RemoveContainer" containerID="a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.413424 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.461693 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnsnn\" (UniqueName: \"kubernetes.io/projected/358e3824-b47f-4b03-939b-de8e27734e40-kube-api-access-pnsnn\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.461722 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.461731 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.461774 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.461783 4780 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/358e3824-b47f-4b03-939b-de8e27734e40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.463377 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.470325 4780 scope.go:117] "RemoveContainer" containerID="21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.470578 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.470779 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba\": container with ID starting with 21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba not found: ID does not exist" containerID="21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.470824 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba"} err="failed to get container status \"21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba\": rpc error: code = NotFound desc = could not find container \"21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba\": container with ID starting with 21fe2f3b69ec1ba929070e0d906eb2996064c6456e4e217394bddf4894827eba not found: ID does not exist" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.470854 4780 scope.go:117] "RemoveContainer" containerID="f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6" Feb 19 08:40:51 crc 
kubenswrapper[4780]: E0219 08:40:51.471730 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6\": container with ID starting with f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6 not found: ID does not exist" containerID="f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.471772 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6"} err="failed to get container status \"f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6\": rpc error: code = NotFound desc = could not find container \"f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6\": container with ID starting with f324deb55a190983284364799e6c52bef5c843240e85aad2f65b3982dcbe50b6 not found: ID does not exist" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.471801 4780 scope.go:117] "RemoveContainer" containerID="510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3" Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.472144 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3\": container with ID starting with 510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3 not found: ID does not exist" containerID="510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.472176 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3"} err="failed to get container status 
\"510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3\": rpc error: code = NotFound desc = could not find container \"510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3\": container with ID starting with 510acb692f65709c43b52e20ea967b6b846657d4ec5f74619a64ab7701cef2c3 not found: ID does not exist" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.472196 4780 scope.go:117] "RemoveContainer" containerID="a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6" Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.474287 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6\": container with ID starting with a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6 not found: ID does not exist" containerID="a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.474334 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6"} err="failed to get container status \"a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6\": rpc error: code = NotFound desc = could not find container \"a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6\": container with ID starting with a5e686db3b64e372c7b9c52e17d7e6bd50d15a40955244f2eebbd3221a9804f6 not found: ID does not exist" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477097 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f45bb7d89-m7r5b"] Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.477505 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="sg-core" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477530 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="sg-core" Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.477563 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="proxy-httpd" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477571 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="proxy-httpd" Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.477600 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-central-agent" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477609 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-central-agent" Feb 19 08:40:51 crc kubenswrapper[4780]: E0219 08:40:51.477632 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-notification-agent" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477640 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-notification-agent" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477866 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="sg-core" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477905 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="proxy-httpd" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.477918 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-notification-agent" Feb 19 08:40:51 crc kubenswrapper[4780]: 
I0219 08:40:51.477940 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="358e3824-b47f-4b03-939b-de8e27734e40" containerName="ceilometer-central-agent" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.478113 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.479089 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.482381 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.483413 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.492860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f45bb7d89-m7r5b"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.561653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-config-data" (OuterVolumeSpecName: "config-data") pod "358e3824-b47f-4b03-939b-de8e27734e40" (UID: "358e3824-b47f-4b03-939b-de8e27734e40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.563314 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.563353 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358e3824-b47f-4b03-939b-de8e27734e40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.634944 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.668981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxztj\" (UniqueName: \"kubernetes.io/projected/8a16f10c-8261-47f0-949b-abe6aaf7a408-kube-api-access-qxztj\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.669074 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-public-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.669286 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-internal-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 
08:40:51.669336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-httpd-config\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.669422 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-config\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.669445 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-ovndb-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.669543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-combined-ca-bundle\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.670386 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.697594 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.706790 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.711262 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.712485 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.763194 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.770012 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84f494b65f-swr5f"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.771033 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-internal-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.771081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-httpd-config\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.774409 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-config\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.774458 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-ovndb-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.774554 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-combined-ca-bundle\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.774599 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxztj\" (UniqueName: \"kubernetes.io/projected/8a16f10c-8261-47f0-949b-abe6aaf7a408-kube-api-access-qxztj\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.774623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-public-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.774649 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-internal-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.784146 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-config\") pod \"neutron-f45bb7d89-m7r5b\" 
(UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.784170 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-httpd-config\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.784300 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-public-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.784734 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-ovndb-tls-certs\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.785969 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-combined-ca-bundle\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.798529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxztj\" (UniqueName: \"kubernetes.io/projected/8a16f10c-8261-47f0-949b-abe6aaf7a408-kube-api-access-qxztj\") pod \"neutron-f45bb7d89-m7r5b\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: 
I0219 08:40:51.820588 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.860560 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57d747cdfb-5j92k"] Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.877910 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.879325 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-config-data\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.879407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.879492 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-log-httpd\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.879553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zz48\" 
(UniqueName: \"kubernetes.io/projected/c070a55b-72c8-49f1-b459-c3c7a95cb573-kube-api-access-8zz48\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.879645 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-run-httpd\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.879724 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-scripts\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.947042 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="358e3824-b47f-4b03-939b-de8e27734e40" path="/var/lib/kubelet/pods/358e3824-b47f-4b03-939b-de8e27734e40/volumes" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-run-httpd\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983285 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-scripts\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983355 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983385 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-config-data\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983411 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983445 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-log-httpd\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.983464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zz48\" (UniqueName: \"kubernetes.io/projected/c070a55b-72c8-49f1-b459-c3c7a95cb573-kube-api-access-8zz48\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.984146 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-run-httpd\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 
08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.990423 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-log-httpd\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.991456 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:51 crc kubenswrapper[4780]: I0219 08:40:51.994844 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-config-data\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.008248 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.011709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zz48\" (UniqueName: \"kubernetes.io/projected/c070a55b-72c8-49f1-b459-c3c7a95cb573-kube-api-access-8zz48\") pod \"ceilometer-0\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.012223 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-scripts\") pod \"ceilometer-0\" (UID: 
\"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " pod="openstack/ceilometer-0" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.043604 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-7gskl"] Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.130207 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dd6cc5868-cncbp"] Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.231787 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.311650 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84f494b65f-swr5f" event={"ID":"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba","Type":"ContainerStarted","Data":"a301fe0eb3978ef5d7d997f3393c426041a650d2d986d9715d453c605deab566"} Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.326608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" event={"ID":"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e","Type":"ContainerStarted","Data":"6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59"} Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.326655 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" event={"ID":"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e","Type":"ContainerStarted","Data":"ea5623fd4c9be9a503f873ab5d20d911b7dcb11636b6ef8f02cf81b2a2264ef8"} Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.350701 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" event={"ID":"f650c235-dc2c-4737-9624-e2ea4d9ed761","Type":"ContainerStarted","Data":"ceb54fb44992352c5007b4258dc19a3b58445c08efc7604ca15a130bccc89c5b"} Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.355518 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-7dd6cc5868-cncbp" event={"ID":"15014e61-c296-45be-b4f2-a7577a276925","Type":"ContainerStarted","Data":"56a86d749c67ec69279fc901c16390ce5cdc125866272f4142cfa6550688b23f"} Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.355669 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerName="dnsmasq-dns" containerID="cri-o://181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b" gracePeriod=10 Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.543582 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f45bb7d89-m7r5b"] Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.705272 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.805894 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-scripts\") pod \"937a02e9-aead-48c0-9c00-28a327719c18\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.806298 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvj5h\" (UniqueName: \"kubernetes.io/projected/937a02e9-aead-48c0-9c00-28a327719c18-kube-api-access-jvj5h\") pod \"937a02e9-aead-48c0-9c00-28a327719c18\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.806348 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-combined-ca-bundle\") pod \"937a02e9-aead-48c0-9c00-28a327719c18\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " Feb 19 08:40:52 crc 
kubenswrapper[4780]: I0219 08:40:52.806392 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-db-sync-config-data\") pod \"937a02e9-aead-48c0-9c00-28a327719c18\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.806449 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937a02e9-aead-48c0-9c00-28a327719c18-etc-machine-id\") pod \"937a02e9-aead-48c0-9c00-28a327719c18\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.806574 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-config-data\") pod \"937a02e9-aead-48c0-9c00-28a327719c18\" (UID: \"937a02e9-aead-48c0-9c00-28a327719c18\") " Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.814521 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937a02e9-aead-48c0-9c00-28a327719c18-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "937a02e9-aead-48c0-9c00-28a327719c18" (UID: "937a02e9-aead-48c0-9c00-28a327719c18"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.819193 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937a02e9-aead-48c0-9c00-28a327719c18-kube-api-access-jvj5h" (OuterVolumeSpecName: "kube-api-access-jvj5h") pod "937a02e9-aead-48c0-9c00-28a327719c18" (UID: "937a02e9-aead-48c0-9c00-28a327719c18"). InnerVolumeSpecName "kube-api-access-jvj5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.823178 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-scripts" (OuterVolumeSpecName: "scripts") pod "937a02e9-aead-48c0-9c00-28a327719c18" (UID: "937a02e9-aead-48c0-9c00-28a327719c18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.861615 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "937a02e9-aead-48c0-9c00-28a327719c18" (UID: "937a02e9-aead-48c0-9c00-28a327719c18"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.908332 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.908357 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvj5h\" (UniqueName: \"kubernetes.io/projected/937a02e9-aead-48c0-9c00-28a327719c18-kube-api-access-jvj5h\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.908367 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.908394 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937a02e9-aead-48c0-9c00-28a327719c18-etc-machine-id\") on node \"crc\" DevicePath \"\"" 
Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.921481 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:40:52 crc kubenswrapper[4780]: W0219 08:40:52.959236 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc070a55b_72c8_49f1_b459_c3c7a95cb573.slice/crio-e2a4cc22ca47b9edf6488ec0e0e8bbe6420ac0d90786b61f402c8e9d468b1078 WatchSource:0}: Error finding container e2a4cc22ca47b9edf6488ec0e0e8bbe6420ac0d90786b61f402c8e9d468b1078: Status 404 returned error can't find the container with id e2a4cc22ca47b9edf6488ec0e0e8bbe6420ac0d90786b61f402c8e9d468b1078 Feb 19 08:40:52 crc kubenswrapper[4780]: I0219 08:40:52.964359 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "937a02e9-aead-48c0-9c00-28a327719c18" (UID: "937a02e9-aead-48c0-9c00-28a327719c18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.000904 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-config-data" (OuterVolumeSpecName: "config-data") pod "937a02e9-aead-48c0-9c00-28a327719c18" (UID: "937a02e9-aead-48c0-9c00-28a327719c18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.010501 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.010539 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937a02e9-aead-48c0-9c00-28a327719c18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.204329 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.315300 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-svc\") pod \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.315622 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-sb\") pod \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.315699 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-swift-storage-0\") pod \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.315764 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8c4lh\" (UniqueName: \"kubernetes.io/projected/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-kube-api-access-8c4lh\") pod \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.315803 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-config\") pod \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.315859 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-nb\") pod \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\" (UID: \"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86\") " Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.319317 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-kube-api-access-8c4lh" (OuterVolumeSpecName: "kube-api-access-8c4lh") pod "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" (UID: "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86"). InnerVolumeSpecName "kube-api-access-8c4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.371062 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" (UID: "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.377290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" (UID: "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.381585 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerID="181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b" exitCode=0 Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.381631 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.381663 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" event={"ID":"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86","Type":"ContainerDied","Data":"181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.381687 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-cjh9f" event={"ID":"9ecae888-cff4-47c4-a4ae-acf1cfb6fd86","Type":"ContainerDied","Data":"5e160f07fe6d511c5c0cd4fb4b4e2045894cbd407ac034cd5a1674b19ccadfb3"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.381702 4780 scope.go:117] "RemoveContainer" containerID="181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.383519 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerStarted","Data":"e2a4cc22ca47b9edf6488ec0e0e8bbe6420ac0d90786b61f402c8e9d468b1078"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.386240 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" (UID: "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.389673 4780 generic.go:334] "Generic (PLEG): container finished" podID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerID="6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59" exitCode=0 Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.389749 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" event={"ID":"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e","Type":"ContainerDied","Data":"6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.393222 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f45bb7d89-m7r5b" event={"ID":"8a16f10c-8261-47f0-949b-abe6aaf7a408","Type":"ContainerStarted","Data":"64fed94c4c48749e5472f14f55be888e45dee9103ea0cef768200af82647f22a"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.400845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd6cc5868-cncbp" event={"ID":"15014e61-c296-45be-b4f2-a7577a276925","Type":"ContainerStarted","Data":"d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.400892 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd6cc5868-cncbp" 
event={"ID":"15014e61-c296-45be-b4f2-a7577a276925","Type":"ContainerStarted","Data":"497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.401669 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.401689 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.410959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xdwdm" event={"ID":"937a02e9-aead-48c0-9c00-28a327719c18","Type":"ContainerDied","Data":"6b194eb5875ac17b8746fa575f9d930d3fea500ae118f8a9f778baef8e15f7ae"} Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.410996 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b194eb5875ac17b8746fa575f9d930d3fea500ae118f8a9f778baef8e15f7ae" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.410995 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xdwdm" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.417643 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" (UID: "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.417681 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.417700 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c4lh\" (UniqueName: \"kubernetes.io/projected/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-kube-api-access-8c4lh\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.417710 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.417718 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.426211 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-config" (OuterVolumeSpecName: "config") pod "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" (UID: "9ecae888-cff4-47c4-a4ae-acf1cfb6fd86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.443434 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dd6cc5868-cncbp" podStartSLOduration=3.443416491 podStartE2EDuration="3.443416491s" podCreationTimestamp="2026-02-19 08:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:53.442889548 +0000 UTC m=+1196.186547007" watchObservedRunningTime="2026-02-19 08:40:53.443416491 +0000 UTC m=+1196.187073940" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.520246 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.520275 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.560079 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:40:53 crc kubenswrapper[4780]: E0219 08:40:53.560494 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937a02e9-aead-48c0-9c00-28a327719c18" containerName="cinder-db-sync" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.560515 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="937a02e9-aead-48c0-9c00-28a327719c18" containerName="cinder-db-sync" Feb 19 08:40:53 crc kubenswrapper[4780]: E0219 08:40:53.560527 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerName="dnsmasq-dns" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.560537 4780 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerName="dnsmasq-dns" Feb 19 08:40:53 crc kubenswrapper[4780]: E0219 08:40:53.560570 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerName="init" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.560579 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerName="init" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.560788 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="937a02e9-aead-48c0-9c00-28a327719c18" containerName="cinder-db-sync" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.561015 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" containerName="dnsmasq-dns" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.562187 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.565982 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.566469 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fl67x" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.566553 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.566649 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.573243 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.641407 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-9d49dd75f-7gskl"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.666600 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.670099 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.720950 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.728622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.728711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.728868 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-scripts\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.728917 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.729118 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghs6t\" (UniqueName: \"kubernetes.io/projected/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-kube-api-access-ghs6t\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.729201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.760480 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-cjh9f"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.766342 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-cjh9f"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.813816 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.815395 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.818978 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-swift-storage-0\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832715 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-scripts\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832771 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832789 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-config\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832804 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-svc\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832861 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8cmm\" (UniqueName: \"kubernetes.io/projected/7579a699-9f79-465c-8161-2cec1aca0af1-kube-api-access-l8cmm\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832891 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghs6t\" (UniqueName: \"kubernetes.io/projected/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-kube-api-access-ghs6t\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.832984 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.833009 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.833082 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.841052 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.841425 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.844003 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.851724 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.860215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-scripts\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.864371 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghs6t\" (UniqueName: \"kubernetes.io/projected/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-kube-api-access-ghs6t\") pod \"cinder-scheduler-0\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.901391 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935234 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-config\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-svc\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935334 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8cmm\" (UniqueName: \"kubernetes.io/projected/7579a699-9f79-465c-8161-2cec1aca0af1-kube-api-access-l8cmm\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 
08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935353 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-scripts\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935404 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data-custom\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935442 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688c4041-1183-459c-9cdb-3535ba15fea7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935459 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrr2\" (UniqueName: \"kubernetes.io/projected/688c4041-1183-459c-9cdb-3535ba15fea7-kube-api-access-xdrr2\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935478 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935503 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-swift-storage-0\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.935522 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/688c4041-1183-459c-9cdb-3535ba15fea7-logs\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.936010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.936177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-config\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.936278 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.936629 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-svc\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.936805 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-swift-storage-0\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.953210 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecae888-cff4-47c4-a4ae-acf1cfb6fd86" path="/var/lib/kubelet/pods/9ecae888-cff4-47c4-a4ae-acf1cfb6fd86/volumes" Feb 19 08:40:53 crc kubenswrapper[4780]: I0219 08:40:53.954499 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8cmm\" (UniqueName: \"kubernetes.io/projected/7579a699-9f79-465c-8161-2cec1aca0af1-kube-api-access-l8cmm\") pod \"dnsmasq-dns-6c8dc7b4d9-mwj9j\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037095 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc 
kubenswrapper[4780]: I0219 08:40:54.037148 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688c4041-1183-459c-9cdb-3535ba15fea7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrr2\" (UniqueName: \"kubernetes.io/projected/688c4041-1183-459c-9cdb-3535ba15fea7-kube-api-access-xdrr2\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037238 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/688c4041-1183-459c-9cdb-3535ba15fea7-logs\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037263 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688c4041-1183-459c-9cdb-3535ba15fea7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037688 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-scripts\") pod 
\"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037750 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/688c4041-1183-459c-9cdb-3535ba15fea7-logs\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.037845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data-custom\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.041943 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.042158 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data-custom\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.042558 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.050529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-scripts\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.059101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrr2\" (UniqueName: \"kubernetes.io/projected/688c4041-1183-459c-9cdb-3535ba15fea7-kube-api-access-xdrr2\") pod \"cinder-api-0\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") " pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.102369 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.127755 4780 scope.go:117] "RemoveContainer" containerID="65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.136960 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.425490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f45bb7d89-m7r5b" event={"ID":"8a16f10c-8261-47f0-949b-abe6aaf7a408","Type":"ContainerStarted","Data":"e6ad5dd9860e6a4ae8010d509505f12ab7f5487560b9bde69360e238499f4fd4"} Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.548108 4780 scope.go:117] "RemoveContainer" containerID="181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b" Feb 19 08:40:54 crc kubenswrapper[4780]: E0219 08:40:54.548707 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b\": container with ID starting with 181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b not found: ID does not exist" containerID="181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.548752 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b"} err="failed to get container status \"181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b\": rpc error: code = NotFound desc = could not find container \"181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b\": container with ID starting with 181383537d1a9335d8ef5741157e2f861fb079ba75a742766cc90cadd2389c0b not found: ID does not exist" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.548777 4780 scope.go:117] "RemoveContainer" containerID="65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1" Feb 19 08:40:54 crc kubenswrapper[4780]: E0219 08:40:54.549337 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1\": container with ID starting with 65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1 not found: ID does not exist" containerID="65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1" Feb 19 08:40:54 crc kubenswrapper[4780]: I0219 08:40:54.549365 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1"} err="failed to get container status \"65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1\": rpc error: code = NotFound desc = could not find container \"65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1\": container with ID starting with 65590831652c63948cc4dcb4a0f34a34e74ad9bad34763b3f9b050315be78bb1 not found: ID does not exist" Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.020302 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.128820 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.241098 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"] Feb 19 08:40:55 crc kubenswrapper[4780]: W0219 08:40:55.247577 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7579a699_9f79_465c_8161_2cec1aca0af1.slice/crio-866da110345843fd426523c8da510bc057c394857b1bb305ec1a9ec89bd511b8 WatchSource:0}: Error finding container 866da110345843fd426523c8da510bc057c394857b1bb305ec1a9ec89bd511b8: Status 404 returned error can't find the container with id 866da110345843fd426523c8da510bc057c394857b1bb305ec1a9ec89bd511b8 Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.474312 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361","Type":"ContainerStarted","Data":"50834027a5f6ebc08db8588f269364b8a11299dc154862c6b49f14f189148682"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.493106 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" event={"ID":"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e","Type":"ContainerStarted","Data":"6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.493319 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerName="dnsmasq-dns" containerID="cri-o://6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e" gracePeriod=10 Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.493594 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.495538 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" event={"ID":"7579a699-9f79-465c-8161-2cec1aca0af1","Type":"ContainerStarted","Data":"866da110345843fd426523c8da510bc057c394857b1bb305ec1a9ec89bd511b8"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.500423 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f45bb7d89-m7r5b" event={"ID":"8a16f10c-8261-47f0-949b-abe6aaf7a408","Type":"ContainerStarted","Data":"0d36fd403a1d035939e04b1b25e02143c36dd932d5f72c7108d1f6415319ef45"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.501170 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.504216 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84f494b65f-swr5f" 
event={"ID":"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba","Type":"ContainerStarted","Data":"6cc78ab8f7b9e9df271b1241208a5165a0e1b133172de580b0941a07a1cbbb55"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.504247 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84f494b65f-swr5f" event={"ID":"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba","Type":"ContainerStarted","Data":"33372ea022a8bd6de99a3d6f15e51d7ba430019ef7b27207983d49036151c801"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.505704 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"688c4041-1183-459c-9cdb-3535ba15fea7","Type":"ContainerStarted","Data":"820a4b51a53e043440bcaf16b4ad384dc559702ac84d137b9ed7776ec4c5362a"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.508245 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerStarted","Data":"1c285c47625009451b67e9b654cc339fc00b6bd0729bb0cdf614b340e8833ae2"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.518408 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" event={"ID":"f650c235-dc2c-4737-9624-e2ea4d9ed761","Type":"ContainerStarted","Data":"cf73772a4d01bf87fe0b1f3121d3412df9da363dc17c7d7d04a7882814ffc9ad"} Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.528935 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" podStartSLOduration=5.528909368 podStartE2EDuration="5.528909368s" podCreationTimestamp="2026-02-19 08:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:55.518319606 +0000 UTC m=+1198.261977065" watchObservedRunningTime="2026-02-19 08:40:55.528909368 +0000 UTC m=+1198.272566817" Feb 19 08:40:55 crc 
kubenswrapper[4780]: I0219 08:40:55.566754 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84f494b65f-swr5f" podStartSLOduration=2.497117066 podStartE2EDuration="5.566731973s" podCreationTimestamp="2026-02-19 08:40:50 +0000 UTC" firstStartedPulling="2026-02-19 08:40:51.769367383 +0000 UTC m=+1194.513024832" lastFinishedPulling="2026-02-19 08:40:54.83898229 +0000 UTC m=+1197.582639739" observedRunningTime="2026-02-19 08:40:55.546853402 +0000 UTC m=+1198.290510841" watchObservedRunningTime="2026-02-19 08:40:55.566731973 +0000 UTC m=+1198.310389422" Feb 19 08:40:55 crc kubenswrapper[4780]: I0219 08:40:55.623824 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f45bb7d89-m7r5b" podStartSLOduration=4.623803853 podStartE2EDuration="4.623803853s" podCreationTimestamp="2026-02-19 08:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:55.574655599 +0000 UTC m=+1198.318313048" watchObservedRunningTime="2026-02-19 08:40:55.623803853 +0000 UTC m=+1198.367461302" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.188362 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.220642 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-svc\") pod \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.220694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgcx\" (UniqueName: \"kubernetes.io/projected/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-kube-api-access-4lgcx\") pod \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.220733 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-nb\") pod \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.220853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-swift-storage-0\") pod \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.220918 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-config\") pod \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.220945 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-sb\") pod \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\" (UID: \"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e\") " Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.280567 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-kube-api-access-4lgcx" (OuterVolumeSpecName: "kube-api-access-4lgcx") pod "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" (UID: "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e"). InnerVolumeSpecName "kube-api-access-4lgcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.325726 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgcx\" (UniqueName: \"kubernetes.io/projected/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-kube-api-access-4lgcx\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.408362 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" (UID: "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.423405 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" (UID: "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.425707 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" (UID: "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.427943 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.427965 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.427976 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.433605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" (UID: "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.444219 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-config" (OuterVolumeSpecName: "config") pod "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" (UID: "cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.534656 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.536274 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.536101 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" event={"ID":"7579a699-9f79-465c-8161-2cec1aca0af1","Type":"ContainerDied","Data":"e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a"} Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.536078 4780 generic.go:334] "Generic (PLEG): container finished" podID="7579a699-9f79-465c-8161-2cec1aca0af1" containerID="e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a" exitCode=0 Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.550851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" event={"ID":"f650c235-dc2c-4737-9624-e2ea4d9ed761","Type":"ContainerStarted","Data":"7edbc265ca1fca9fec89c4aa613f291e13e679212a1411e4d048f4165e32dd71"} Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.573114 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"688c4041-1183-459c-9cdb-3535ba15fea7","Type":"ContainerStarted","Data":"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"}
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.575484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerStarted","Data":"70407c617787a6edf2338681cbaa820a5dc2453f03dc45619e85306d7403b561"}
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.579355 4780 generic.go:334] "Generic (PLEG): container finished" podID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerID="6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e" exitCode=0
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.579892 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" event={"ID":"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e","Type":"ContainerDied","Data":"6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e"}
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.579957 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl" event={"ID":"cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e","Type":"ContainerDied","Data":"ea5623fd4c9be9a503f873ab5d20d911b7dcb11636b6ef8f02cf81b2a2264ef8"}
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.579976 4780 scope.go:117] "RemoveContainer" containerID="6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.580150 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-7gskl"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.658834 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" podStartSLOduration=3.970169637 podStartE2EDuration="6.65881665s" podCreationTimestamp="2026-02-19 08:40:50 +0000 UTC" firstStartedPulling="2026-02-19 08:40:51.885951873 +0000 UTC m=+1194.629609322" lastFinishedPulling="2026-02-19 08:40:54.574598876 +0000 UTC m=+1197.318256335" observedRunningTime="2026-02-19 08:40:56.589418845 +0000 UTC m=+1199.333076294" watchObservedRunningTime="2026-02-19 08:40:56.65881665 +0000 UTC m=+1199.402474099"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.666440 4780 scope.go:117] "RemoveContainer" containerID="6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.670204 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-7gskl"]
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.679273 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-7gskl"]
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.714698 4780 scope.go:117] "RemoveContainer" containerID="6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e"
Feb 19 08:40:56 crc kubenswrapper[4780]: E0219 08:40:56.715144 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e\": container with ID starting with 6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e not found: ID does not exist" containerID="6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.715187 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e"} err="failed to get container status \"6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e\": rpc error: code = NotFound desc = could not find container \"6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e\": container with ID starting with 6145a272867366f4ccaba60cf5a39606f229e2651aafd7d4decdad5a37d92c6e not found: ID does not exist"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.715239 4780 scope.go:117] "RemoveContainer" containerID="6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59"
Feb 19 08:40:56 crc kubenswrapper[4780]: E0219 08:40:56.716734 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59\": container with ID starting with 6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59 not found: ID does not exist" containerID="6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59"
Feb 19 08:40:56 crc kubenswrapper[4780]: I0219 08:40:56.716776 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59"} err="failed to get container status \"6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59\": rpc error: code = NotFound desc = could not find container \"6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59\": container with ID starting with 6ed21987858eee932b86bf5e97ec5d5d919e9d8c17fb4f0ad935a324f0fe2c59 not found: ID does not exist"
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.611325 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"688c4041-1183-459c-9cdb-3535ba15fea7","Type":"ContainerStarted","Data":"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"}
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.612870 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.628352 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerStarted","Data":"677208c92ad9b8a9eac0caac53164594983d51933b2cff6842e9ec4c7d43f72b"}
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.639879 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361","Type":"ContainerStarted","Data":"3d7c2ced2dd4d670d36bf220f89cfa44056436ffda36738dc8575db23b6e0214"}
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.651488 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.6514637 podStartE2EDuration="4.6514637s" podCreationTimestamp="2026-02-19 08:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:57.638259244 +0000 UTC m=+1200.381916693" watchObservedRunningTime="2026-02-19 08:40:57.6514637 +0000 UTC m=+1200.395121149"
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.655151 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" event={"ID":"7579a699-9f79-465c-8161-2cec1aca0af1","Type":"ContainerStarted","Data":"b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529"}
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.656258 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"
Feb 19 08:40:57 crc kubenswrapper[4780]: I0219 08:40:57.686021 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" podStartSLOduration=4.686006874 podStartE2EDuration="4.686006874s" podCreationTimestamp="2026-02-19 08:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:40:57.685616694 +0000 UTC m=+1200.429274133" watchObservedRunningTime="2026-02-19 08:40:57.686006874 +0000 UTC m=+1200.429664313"
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.063534 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" path="/var/lib/kubelet/pods/cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e/volumes"
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.676868 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerStarted","Data":"fda8a3c991411d59a2c8372be2edfc118e891bb7f6cbd411491c03df8b3d4fa6"}
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.677314 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.688749 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361","Type":"ContainerStarted","Data":"aa7c8b74ba07b82fb7c0d7715c79991f2aad9233af7182e4e21e4173af7d9c54"}
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.719964 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.744477221 podStartE2EDuration="7.719938934s" podCreationTimestamp="2026-02-19 08:40:51 +0000 UTC" firstStartedPulling="2026-02-19 08:40:52.961944313 +0000 UTC m=+1195.705601762" lastFinishedPulling="2026-02-19 08:40:57.937406026 +0000 UTC m=+1200.681063475" observedRunningTime="2026-02-19 08:40:58.712428769 +0000 UTC m=+1201.456086228" watchObservedRunningTime="2026-02-19 08:40:58.719938934 +0000 UTC m=+1201.463596383"
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.761801 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.4288429990000004 podStartE2EDuration="5.761778668s" podCreationTimestamp="2026-02-19 08:40:53 +0000 UTC" firstStartedPulling="2026-02-19 08:40:55.058415382 +0000 UTC m=+1197.802072831" lastFinishedPulling="2026-02-19 08:40:56.391351051 +0000 UTC m=+1199.135008500" observedRunningTime="2026-02-19 08:40:58.75416285 +0000 UTC m=+1201.497820309" watchObservedRunningTime="2026-02-19 08:40:58.761778668 +0000 UTC m=+1201.505436117"
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.765993 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 08:40:58 crc kubenswrapper[4780]: I0219 08:40:58.901915 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.388207 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f57f4f6f6-8lqlt"]
Feb 19 08:40:59 crc kubenswrapper[4780]: E0219 08:40:59.388929 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerName="init"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.388949 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerName="init"
Feb 19 08:40:59 crc kubenswrapper[4780]: E0219 08:40:59.388962 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerName="dnsmasq-dns"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.388970 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerName="dnsmasq-dns"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.389198 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8c1c5d-450b-4bb5-88d3-d0ccbf0caf2e" containerName="dnsmasq-dns"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.390297 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.393701 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.394807 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.408701 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f57f4f6f6-8lqlt"]
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.413987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-internal-tls-certs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.414274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzh6\" (UniqueName: \"kubernetes.io/projected/4ef67457-e347-4ea9-b488-32b52af9146c-kube-api-access-fzzh6\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.414406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-combined-ca-bundle\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.414566 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef67457-e347-4ea9-b488-32b52af9146c-logs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.414731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-public-tls-certs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.414832 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.414952 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data-custom\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516390 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data-custom\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-internal-tls-certs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516529 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzh6\" (UniqueName: \"kubernetes.io/projected/4ef67457-e347-4ea9-b488-32b52af9146c-kube-api-access-fzzh6\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516555 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-combined-ca-bundle\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516583 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef67457-e347-4ea9-b488-32b52af9146c-logs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.516630 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-public-tls-certs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.517302 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef67457-e347-4ea9-b488-32b52af9146c-logs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.525199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.525526 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-combined-ca-bundle\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.525627 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-public-tls-certs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.528600 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data-custom\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.528614 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-internal-tls-certs\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.540230 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzzh6\" (UniqueName: \"kubernetes.io/projected/4ef67457-e347-4ea9-b488-32b52af9146c-kube-api-access-fzzh6\") pod \"barbican-api-6f57f4f6f6-8lqlt\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.712307 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:40:59 crc kubenswrapper[4780]: I0219 08:40:59.972896 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7dd6cc5868-cncbp" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 08:41:00 crc kubenswrapper[4780]: W0219 08:41:00.216396 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ef67457_e347_4ea9_b488_32b52af9146c.slice/crio-5b81cf03233af003a848b04e070d14fda31b593a206c757e3ba3e3d686e8c95f WatchSource:0}: Error finding container 5b81cf03233af003a848b04e070d14fda31b593a206c757e3ba3e3d686e8c95f: Status 404 returned error can't find the container with id 5b81cf03233af003a848b04e070d14fda31b593a206c757e3ba3e3d686e8c95f
Feb 19 08:41:00 crc kubenswrapper[4780]: I0219 08:41:00.219512 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f57f4f6f6-8lqlt"]
Feb 19 08:41:00 crc kubenswrapper[4780]: I0219 08:41:00.702562 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api-log" containerID="cri-o://d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471" gracePeriod=30
Feb 19 08:41:00 crc kubenswrapper[4780]: I0219 08:41:00.703091 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" event={"ID":"4ef67457-e347-4ea9-b488-32b52af9146c","Type":"ContainerStarted","Data":"516c8de3c33c0337fd76fb32a1510070e91e8a75deedbf4866e716ba08c4c8aa"}
Feb 19 08:41:00 crc kubenswrapper[4780]: I0219 08:41:00.703611 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" event={"ID":"4ef67457-e347-4ea9-b488-32b52af9146c","Type":"ContainerStarted","Data":"53eecf6f3abbe44f7e06ac0af7e4deebaf1979eb160ea1159e05a543ac4aea01"}
Feb 19 08:41:00 crc kubenswrapper[4780]: I0219 08:41:00.704370 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" event={"ID":"4ef67457-e347-4ea9-b488-32b52af9146c","Type":"ContainerStarted","Data":"5b81cf03233af003a848b04e070d14fda31b593a206c757e3ba3e3d686e8c95f"}
Feb 19 08:41:00 crc kubenswrapper[4780]: I0219 08:41:00.703281 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api" containerID="cri-o://7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f" gracePeriod=30
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.299887 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.330974 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" podStartSLOduration=2.330946926 podStartE2EDuration="2.330946926s" podCreationTimestamp="2026-02-19 08:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:00.741842728 +0000 UTC m=+1203.485500177" watchObservedRunningTime="2026-02-19 08:41:01.330946926 +0000 UTC m=+1204.074604415"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346087 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688c4041-1183-459c-9cdb-3535ba15fea7-etc-machine-id\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346152 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data-custom\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346205 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688c4041-1183-459c-9cdb-3535ba15fea7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346339 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-scripts\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/688c4041-1183-459c-9cdb-3535ba15fea7-logs\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346469 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-combined-ca-bundle\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346484 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrr2\" (UniqueName: \"kubernetes.io/projected/688c4041-1183-459c-9cdb-3535ba15fea7-kube-api-access-xdrr2\") pod \"688c4041-1183-459c-9cdb-3535ba15fea7\" (UID: \"688c4041-1183-459c-9cdb-3535ba15fea7\") "
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346726 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688c4041-1183-459c-9cdb-3535ba15fea7-logs" (OuterVolumeSpecName: "logs") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346837 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/688c4041-1183-459c-9cdb-3535ba15fea7-logs\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.346848 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688c4041-1183-459c-9cdb-3535ba15fea7-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.351278 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-scripts" (OuterVolumeSpecName: "scripts") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.351883 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688c4041-1183-459c-9cdb-3535ba15fea7-kube-api-access-xdrr2" (OuterVolumeSpecName: "kube-api-access-xdrr2") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "kube-api-access-xdrr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.355937 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.369723 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.411818 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data" (OuterVolumeSpecName: "config-data") pod "688c4041-1183-459c-9cdb-3535ba15fea7" (UID: "688c4041-1183-459c-9cdb-3535ba15fea7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.448314 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.448350 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.448364 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrr2\" (UniqueName: \"kubernetes.io/projected/688c4041-1183-459c-9cdb-3535ba15fea7-kube-api-access-xdrr2\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.448377 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.448389 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688c4041-1183-459c-9cdb-3535ba15fea7-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713384 4780 generic.go:334] "Generic (PLEG): container finished" podID="688c4041-1183-459c-9cdb-3535ba15fea7" containerID="7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f" exitCode=0
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713422 4780 generic.go:334] "Generic (PLEG): container finished" podID="688c4041-1183-459c-9cdb-3535ba15fea7" containerID="d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471" exitCode=143
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713466 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"688c4041-1183-459c-9cdb-3535ba15fea7","Type":"ContainerDied","Data":"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"}
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713550 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"688c4041-1183-459c-9cdb-3535ba15fea7","Type":"ContainerDied","Data":"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"}
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713562 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"688c4041-1183-459c-9cdb-3535ba15fea7","Type":"ContainerDied","Data":"820a4b51a53e043440bcaf16b4ad384dc559702ac84d137b9ed7776ec4c5362a"}
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.713580 4780 scope.go:117] "RemoveContainer" containerID="7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.714181 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.714216 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f57f4f6f6-8lqlt"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.749249 4780 scope.go:117] "RemoveContainer" containerID="d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.788173 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.792240 4780 scope.go:117] "RemoveContainer" containerID="7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"
Feb 19 08:41:01 crc kubenswrapper[4780]: E0219 08:41:01.796230 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f\": container with ID starting with 7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f not found: ID does not exist" containerID="7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.796267 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"} err="failed to get container status \"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f\": rpc error: code = NotFound desc = could not find container \"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f\": container with ID starting with 7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f not found: ID does not exist"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.796294 4780 scope.go:117] "RemoveContainer" containerID="d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"
Feb 19 08:41:01 crc kubenswrapper[4780]: E0219 08:41:01.800177 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471\": container with ID starting with d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471 not found: ID does not exist" containerID="d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.800197 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"} err="failed to get container status \"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471\": rpc error: code = NotFound desc = could not find container \"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471\": container with ID starting with d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471 not found: ID does not exist"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.800209 4780 scope.go:117] "RemoveContainer" containerID="7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.804261 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f"} err="failed to get container status \"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f\": rpc error: code = NotFound desc = could not find container \"7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f\": container with ID starting with 7af097360836fcdbc03d75604741bcd7f7a53ed77f539c98c1c3ece8153a5a3f not found: ID does not exist"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.804286 4780 scope.go:117] "RemoveContainer" containerID="d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"
Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.804913 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471"} err="failed to get container status \"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471\": rpc error: code = NotFound desc = could not find container \"d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471\": container with ID starting with d9d70bf391e68df5b9e2dc048a7ee9b803fe77c65a2af8988cf27af80d880471 not found: ID does not
exist" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.810215 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.821891 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:41:01 crc kubenswrapper[4780]: E0219 08:41:01.822307 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api-log" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.822326 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api-log" Feb 19 08:41:01 crc kubenswrapper[4780]: E0219 08:41:01.822369 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.822378 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.822595 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.822635 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" containerName="cinder-api-log" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.823956 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.826589 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.827029 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.827361 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.843742 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861488 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/041edb21-581b-493e-a2f1-09e0b3559df1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861574 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data-custom\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861593 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-scripts\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861610 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pbbcc\" (UniqueName: \"kubernetes.io/projected/041edb21-581b-493e-a2f1-09e0b3559df1-kube-api-access-pbbcc\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861634 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/041edb21-581b-493e-a2f1-09e0b3559df1-logs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861670 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.861771 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-internal-tls-certs\") pod 
\"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.959006 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688c4041-1183-459c-9cdb-3535ba15fea7" path="/var/lib/kubelet/pods/688c4041-1183-459c-9cdb-3535ba15fea7/volumes" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.962995 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963082 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963180 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963210 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/041edb21-581b-493e-a2f1-09e0b3559df1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data-custom\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963297 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-scripts\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbbcc\" (UniqueName: \"kubernetes.io/projected/041edb21-581b-493e-a2f1-09e0b3559df1-kube-api-access-pbbcc\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/041edb21-581b-493e-a2f1-09e0b3559df1-logs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.963862 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/041edb21-581b-493e-a2f1-09e0b3559df1-logs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.964480 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/041edb21-581b-493e-a2f1-09e0b3559df1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.967518 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.968160 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.970629 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data-custom\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.972643 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-scripts\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.972728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " 
pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.975635 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:01 crc kubenswrapper[4780]: I0219 08:41:01.983885 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbbcc\" (UniqueName: \"kubernetes.io/projected/041edb21-581b-493e-a2f1-09e0b3559df1-kube-api-access-pbbcc\") pod \"cinder-api-0\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " pod="openstack/cinder-api-0" Feb 19 08:41:02 crc kubenswrapper[4780]: I0219 08:41:02.150984 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 08:41:02 crc kubenswrapper[4780]: I0219 08:41:02.663783 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:41:02 crc kubenswrapper[4780]: W0219 08:41:02.672005 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041edb21_581b_493e_a2f1_09e0b3559df1.slice/crio-4cab19d07a29637de3732196fc0dfabff1657564c847d3f62e1dec3bafb7095a WatchSource:0}: Error finding container 4cab19d07a29637de3732196fc0dfabff1657564c847d3f62e1dec3bafb7095a: Status 404 returned error can't find the container with id 4cab19d07a29637de3732196fc0dfabff1657564c847d3f62e1dec3bafb7095a Feb 19 08:41:02 crc kubenswrapper[4780]: I0219 08:41:02.750245 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"041edb21-581b-493e-a2f1-09e0b3559df1","Type":"ContainerStarted","Data":"4cab19d07a29637de3732196fc0dfabff1657564c847d3f62e1dec3bafb7095a"} Feb 19 08:41:03 crc kubenswrapper[4780]: I0219 08:41:03.027650 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:41:03 crc kubenswrapper[4780]: I0219 08:41:03.427513 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:41:03 crc kubenswrapper[4780]: I0219 08:41:03.815032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"041edb21-581b-493e-a2f1-09e0b3559df1","Type":"ContainerStarted","Data":"9b449827662b36d6c9c93dd1e51b9613703943aec7408ea0336acf021bd59ad8"} Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.104304 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.110782 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.191721 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-k8fms"] Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.191972 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67dccc895-k8fms" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="dnsmasq-dns" containerID="cri-o://c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0" gracePeriod=10 Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.200474 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.716575 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.824679 4780 generic.go:334] "Generic (PLEG): container finished" podID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerID="c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0" exitCode=0 Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.824747 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-k8fms" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.824781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-k8fms" event={"ID":"c93d79e3-0727-4aef-b65d-98d315a9957b","Type":"ContainerDied","Data":"c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0"} Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.824841 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-k8fms" event={"ID":"c93d79e3-0727-4aef-b65d-98d315a9957b","Type":"ContainerDied","Data":"a4b9410d9beaf4bbb7ecea5bca5cd07bc67fa1b51615681f16c3a70dac27f5f7"} Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.824866 4780 scope.go:117] "RemoveContainer" containerID="c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.827739 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"041edb21-581b-493e-a2f1-09e0b3559df1","Type":"ContainerStarted","Data":"7b196fe6ed67411ad242dd795c03caa7fd2feb9dbef00ff8b65a8ef8e03b4da0"} Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.827921 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="cinder-scheduler" containerID="cri-o://3d7c2ced2dd4d670d36bf220f89cfa44056436ffda36738dc8575db23b6e0214" gracePeriod=30 Feb 19 08:41:04 crc 
kubenswrapper[4780]: I0219 08:41:04.828386 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="probe" containerID="cri-o://aa7c8b74ba07b82fb7c0d7715c79991f2aad9233af7182e4e21e4173af7d9c54" gracePeriod=30 Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.833690 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-nb\") pod \"c93d79e3-0727-4aef-b65d-98d315a9957b\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.833737 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-swift-storage-0\") pod \"c93d79e3-0727-4aef-b65d-98d315a9957b\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.833775 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-config\") pod \"c93d79e3-0727-4aef-b65d-98d315a9957b\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.833808 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-sb\") pod \"c93d79e3-0727-4aef-b65d-98d315a9957b\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.833898 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-svc\") pod 
\"c93d79e3-0727-4aef-b65d-98d315a9957b\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.833965 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87znl\" (UniqueName: \"kubernetes.io/projected/c93d79e3-0727-4aef-b65d-98d315a9957b-kube-api-access-87znl\") pod \"c93d79e3-0727-4aef-b65d-98d315a9957b\" (UID: \"c93d79e3-0727-4aef-b65d-98d315a9957b\") " Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.842620 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93d79e3-0727-4aef-b65d-98d315a9957b-kube-api-access-87znl" (OuterVolumeSpecName: "kube-api-access-87znl") pod "c93d79e3-0727-4aef-b65d-98d315a9957b" (UID: "c93d79e3-0727-4aef-b65d-98d315a9957b"). InnerVolumeSpecName "kube-api-access-87znl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.860303 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.860288231 podStartE2EDuration="3.860288231s" podCreationTimestamp="2026-02-19 08:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:04.860117447 +0000 UTC m=+1207.603774896" watchObservedRunningTime="2026-02-19 08:41:04.860288231 +0000 UTC m=+1207.603945680" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.864298 4780 scope.go:117] "RemoveContainer" containerID="89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.894741 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c93d79e3-0727-4aef-b65d-98d315a9957b" (UID: 
"c93d79e3-0727-4aef-b65d-98d315a9957b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.901469 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-config" (OuterVolumeSpecName: "config") pod "c93d79e3-0727-4aef-b65d-98d315a9957b" (UID: "c93d79e3-0727-4aef-b65d-98d315a9957b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.908437 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c93d79e3-0727-4aef-b65d-98d315a9957b" (UID: "c93d79e3-0727-4aef-b65d-98d315a9957b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.920642 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c93d79e3-0727-4aef-b65d-98d315a9957b" (UID: "c93d79e3-0727-4aef-b65d-98d315a9957b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.928839 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c93d79e3-0727-4aef-b65d-98d315a9957b" (UID: "c93d79e3-0727-4aef-b65d-98d315a9957b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.936329 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.936370 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.936382 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87znl\" (UniqueName: \"kubernetes.io/projected/c93d79e3-0727-4aef-b65d-98d315a9957b-kube-api-access-87znl\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.936394 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.936405 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.936416 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93d79e3-0727-4aef-b65d-98d315a9957b-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.946390 4780 scope.go:117] "RemoveContainer" containerID="c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0" Feb 19 08:41:04 crc kubenswrapper[4780]: E0219 08:41:04.946872 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0\": container with ID starting with c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0 not found: ID does not exist" containerID="c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.946901 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0"} err="failed to get container status \"c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0\": rpc error: code = NotFound desc = could not find container \"c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0\": container with ID starting with c6cbd10685274dfb878593d23d317ac9b9b359d654e264f6188a377755d641d0 not found: ID does not exist" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.946921 4780 scope.go:117] "RemoveContainer" containerID="89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100" Feb 19 08:41:04 crc kubenswrapper[4780]: E0219 08:41:04.947365 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100\": container with ID starting with 89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100 not found: ID does not exist" containerID="89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100" Feb 19 08:41:04 crc kubenswrapper[4780]: I0219 08:41:04.947428 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100"} err="failed to get container status \"89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100\": rpc error: code = NotFound desc = could not find container 
\"89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100\": container with ID starting with 89ff16b289ff041e945fdb0edcfca9cd3dd24661cf95c3e3b1fbbe924952f100 not found: ID does not exist" Feb 19 08:41:05 crc kubenswrapper[4780]: I0219 08:41:05.162258 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-k8fms"] Feb 19 08:41:05 crc kubenswrapper[4780]: I0219 08:41:05.170503 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-k8fms"] Feb 19 08:41:05 crc kubenswrapper[4780]: I0219 08:41:05.868971 4780 generic.go:334] "Generic (PLEG): container finished" podID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerID="aa7c8b74ba07b82fb7c0d7715c79991f2aad9233af7182e4e21e4173af7d9c54" exitCode=0 Feb 19 08:41:05 crc kubenswrapper[4780]: I0219 08:41:05.869991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361","Type":"ContainerDied","Data":"aa7c8b74ba07b82fb7c0d7715c79991f2aad9233af7182e4e21e4173af7d9c54"} Feb 19 08:41:05 crc kubenswrapper[4780]: I0219 08:41:05.870019 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 08:41:05 crc kubenswrapper[4780]: I0219 08:41:05.948415 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" path="/var/lib/kubelet/pods/c93d79e3-0727-4aef-b65d-98d315a9957b/volumes" Feb 19 08:41:07 crc kubenswrapper[4780]: I0219 08:41:07.630602 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.019591 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.529335 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] 
Feb 19 08:41:08 crc kubenswrapper[4780]: E0219 08:41:08.529793 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="dnsmasq-dns" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.529817 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="dnsmasq-dns" Feb 19 08:41:08 crc kubenswrapper[4780]: E0219 08:41:08.529854 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="init" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.529862 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="init" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.530096 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="dnsmasq-dns" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.530903 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.533396 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6c94w" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.533969 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.535594 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.540084 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.601469 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.601543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.601708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4zj\" (UniqueName: \"kubernetes.io/projected/bc29e551-efab-43d8-94d5-1c515a76dca9-kube-api-access-6v4zj\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.601819 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.703453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.703534 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4zj\" (UniqueName: \"kubernetes.io/projected/bc29e551-efab-43d8-94d5-1c515a76dca9-kube-api-access-6v4zj\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.703591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.703651 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.704759 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.708492 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.713638 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.722926 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4zj\" (UniqueName: \"kubernetes.io/projected/bc29e551-efab-43d8-94d5-1c515a76dca9-kube-api-access-6v4zj\") pod \"openstackclient\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.849539 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.940390 4780 generic.go:334] "Generic (PLEG): container finished" podID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerID="3d7c2ced2dd4d670d36bf220f89cfa44056436ffda36738dc8575db23b6e0214" exitCode=0 Feb 19 08:41:08 crc kubenswrapper[4780]: I0219 08:41:08.940622 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361","Type":"ContainerDied","Data":"3d7c2ced2dd4d670d36bf220f89cfa44056436ffda36738dc8575db23b6e0214"} Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.409078 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:41:09 crc kubenswrapper[4780]: W0219 08:41:09.438538 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc29e551_efab_43d8_94d5_1c515a76dca9.slice/crio-ce2e495454ac8fd45bc58587f00aba8a1cd52f8622230ff341f7e2b4be77b7b1 WatchSource:0}: Error finding container ce2e495454ac8fd45bc58587f00aba8a1cd52f8622230ff341f7e2b4be77b7b1: Status 404 returned error can't find the container with id ce2e495454ac8fd45bc58587f00aba8a1cd52f8622230ff341f7e2b4be77b7b1 Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.446384 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.533483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data\") pod \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.533763 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-etc-machine-id\") pod \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.533790 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-combined-ca-bundle\") pod \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.533853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghs6t\" (UniqueName: \"kubernetes.io/projected/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-kube-api-access-ghs6t\") pod \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.533930 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-scripts\") pod \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.533956 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data-custom\") pod \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\" (UID: \"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361\") " Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.535116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" (UID: "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.540434 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-kube-api-access-ghs6t" (OuterVolumeSpecName: "kube-api-access-ghs6t") pod "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" (UID: "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361"). InnerVolumeSpecName "kube-api-access-ghs6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.543792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-scripts" (OuterVolumeSpecName: "scripts") pod "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" (UID: "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.545200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" (UID: "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.602544 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67dccc895-k8fms" podUID="c93d79e3-0727-4aef-b65d-98d315a9957b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: i/o timeout" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.610380 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" (UID: "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.636930 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.636973 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.636989 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghs6t\" (UniqueName: \"kubernetes.io/projected/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-kube-api-access-ghs6t\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.637003 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.637015 4780 reconciler_common.go:293] "Volume detached for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.638430 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data" (OuterVolumeSpecName: "config-data") pod "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" (UID: "b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.738402 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:09 crc kubenswrapper[4780]: I0219 08:41:09.979964 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bc29e551-efab-43d8-94d5-1c515a76dca9","Type":"ContainerStarted","Data":"ce2e495454ac8fd45bc58587f00aba8a1cd52f8622230ff341f7e2b4be77b7b1"} Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.009069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361","Type":"ContainerDied","Data":"50834027a5f6ebc08db8588f269364b8a11299dc154862c6b49f14f189148682"} Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.009115 4780 scope.go:117] "RemoveContainer" containerID="aa7c8b74ba07b82fb7c0d7715c79991f2aad9233af7182e4e21e4173af7d9c54" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.009327 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.062588 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.070246 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.081302 4780 scope.go:117] "RemoveContainer" containerID="3d7c2ced2dd4d670d36bf220f89cfa44056436ffda36738dc8575db23b6e0214" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.081482 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:41:10 crc kubenswrapper[4780]: E0219 08:41:10.081885 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="cinder-scheduler" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.081952 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="cinder-scheduler" Feb 19 08:41:10 crc kubenswrapper[4780]: E0219 08:41:10.082013 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="probe" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.082060 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="probe" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.082282 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="probe" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.082349 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" containerName="cinder-scheduler" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.083273 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.087466 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.090763 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.144946 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-scripts\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.145400 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.145493 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.145586 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98f20ebd-43c0-4332-988a-f487d7704bc1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.145683 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5jzf\" (UniqueName: \"kubernetes.io/projected/98f20ebd-43c0-4332-988a-f487d7704bc1-kube-api-access-x5jzf\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.145767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247281 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98f20ebd-43c0-4332-988a-f487d7704bc1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247355 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5jzf\" 
(UniqueName: \"kubernetes.io/projected/98f20ebd-43c0-4332-988a-f487d7704bc1-kube-api-access-x5jzf\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247386 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247438 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-scripts\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.247865 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98f20ebd-43c0-4332-988a-f487d7704bc1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.251811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-scripts\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.251999 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 
19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.252839 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.255839 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.267087 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5jzf\" (UniqueName: \"kubernetes.io/projected/98f20ebd-43c0-4332-988a-f487d7704bc1-kube-api-access-x5jzf\") pod \"cinder-scheduler-0\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " pod="openstack/cinder-scheduler-0" Feb 19 08:41:10 crc kubenswrapper[4780]: I0219 08:41:10.410787 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:41:11 crc kubenswrapper[4780]: I0219 08:41:11.043981 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:41:11 crc kubenswrapper[4780]: I0219 08:41:11.737684 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" Feb 19 08:41:11 crc kubenswrapper[4780]: I0219 08:41:11.951018 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361" path="/var/lib/kubelet/pods/b431f3d8-8fd0-4b0f-a0b7-281aaaa8d361/volumes" Feb 19 08:41:12 crc kubenswrapper[4780]: I0219 08:41:12.001821 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" Feb 19 08:41:12 crc kubenswrapper[4780]: I0219 08:41:12.060889 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dd6cc5868-cncbp"] Feb 19 08:41:12 crc kubenswrapper[4780]: I0219 08:41:12.061109 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dd6cc5868-cncbp" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api-log" containerID="cri-o://d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414" gracePeriod=30 Feb 19 08:41:12 crc kubenswrapper[4780]: I0219 08:41:12.061523 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dd6cc5868-cncbp" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api" containerID="cri-o://497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa" gracePeriod=30 Feb 19 08:41:12 crc kubenswrapper[4780]: I0219 08:41:12.097212 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"98f20ebd-43c0-4332-988a-f487d7704bc1","Type":"ContainerStarted","Data":"7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e"} Feb 19 08:41:12 crc kubenswrapper[4780]: I0219 08:41:12.097257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98f20ebd-43c0-4332-988a-f487d7704bc1","Type":"ContainerStarted","Data":"129373fb6224c5c1717d2e3db01c6e03e85b15185ba660814ed94ef67b5055fa"} Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.113943 4780 generic.go:334] "Generic (PLEG): container finished" podID="15014e61-c296-45be-b4f2-a7577a276925" containerID="d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414" exitCode=143 Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.114023 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd6cc5868-cncbp" event={"ID":"15014e61-c296-45be-b4f2-a7577a276925","Type":"ContainerDied","Data":"d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414"} Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.118389 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98f20ebd-43c0-4332-988a-f487d7704bc1","Type":"ContainerStarted","Data":"ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5"} Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.149083 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.149064619 podStartE2EDuration="3.149064619s" podCreationTimestamp="2026-02-19 08:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:13.148001193 +0000 UTC m=+1215.891658642" watchObservedRunningTime="2026-02-19 08:41:13.149064619 +0000 UTC m=+1215.892722068" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.761245 4780 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/swift-proxy-565f58cc6f-vwtvf"] Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.764041 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.777284 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.777309 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.777529 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.786076 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-565f58cc6f-vwtvf"] Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930064 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-log-httpd\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930171 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwglk\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-kube-api-access-jwglk\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930221 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-combined-ca-bundle\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930247 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-public-tls-certs\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930318 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-run-httpd\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930345 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-config-data\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930394 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-internal-tls-certs\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:13 crc kubenswrapper[4780]: I0219 08:41:13.930491 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-etc-swift\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.031847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-log-httpd\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032052 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwglk\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-kube-api-access-jwglk\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032340 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-combined-ca-bundle\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032372 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-public-tls-certs\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032416 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-run-httpd\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-config-data\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032503 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-internal-tls-certs\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032499 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-log-httpd\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.032536 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-etc-swift\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.033459 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-run-httpd\") pod 
\"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.037976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-combined-ca-bundle\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.038112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-config-data\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.044833 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-public-tls-certs\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.047222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-etc-swift\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.048631 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-internal-tls-certs\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " 
pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.050763 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwglk\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-kube-api-access-jwglk\") pod \"swift-proxy-565f58cc6f-vwtvf\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.095413 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:14 crc kubenswrapper[4780]: I0219 08:41:14.771575 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.050855 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.051149 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-central-agent" containerID="cri-o://1c285c47625009451b67e9b654cc339fc00b6bd0729bb0cdf614b340e8833ae2" gracePeriod=30 Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.051979 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="proxy-httpd" containerID="cri-o://fda8a3c991411d59a2c8372be2edfc118e891bb7f6cbd411491c03df8b3d4fa6" gracePeriod=30 Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.052207 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-notification-agent" containerID="cri-o://70407c617787a6edf2338681cbaa820a5dc2453f03dc45619e85306d7403b561" gracePeriod=30 
Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.052287 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="sg-core" containerID="cri-o://677208c92ad9b8a9eac0caac53164594983d51933b2cff6842e9ec4c7d43f72b" gracePeriod=30 Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.064352 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.158:3000/\": EOF" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.250571 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-565f58cc6f-vwtvf"] Feb 19 08:41:15 crc kubenswrapper[4780]: W0219 08:41:15.312828 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ef81227_694a_4bad_b32b_809d351ec668.slice/crio-41bda972e55b925a86e62f6b1b59a62ad4919b285a79ece69ad219fbb1476b3d WatchSource:0}: Error finding container 41bda972e55b925a86e62f6b1b59a62ad4919b285a79ece69ad219fbb1476b3d: Status 404 returned error can't find the container with id 41bda972e55b925a86e62f6b1b59a62ad4919b285a79ece69ad219fbb1476b3d Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.411227 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.673931 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.765863 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-combined-ca-bundle\") pod \"15014e61-c296-45be-b4f2-a7577a276925\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.765929 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data-custom\") pod \"15014e61-c296-45be-b4f2-a7577a276925\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.765978 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15014e61-c296-45be-b4f2-a7577a276925-logs\") pod \"15014e61-c296-45be-b4f2-a7577a276925\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.765999 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data\") pod \"15014e61-c296-45be-b4f2-a7577a276925\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.766024 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8dj8\" (UniqueName: \"kubernetes.io/projected/15014e61-c296-45be-b4f2-a7577a276925-kube-api-access-k8dj8\") pod \"15014e61-c296-45be-b4f2-a7577a276925\" (UID: \"15014e61-c296-45be-b4f2-a7577a276925\") " Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.767200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/15014e61-c296-45be-b4f2-a7577a276925-logs" (OuterVolumeSpecName: "logs") pod "15014e61-c296-45be-b4f2-a7577a276925" (UID: "15014e61-c296-45be-b4f2-a7577a276925"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.771344 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "15014e61-c296-45be-b4f2-a7577a276925" (UID: "15014e61-c296-45be-b4f2-a7577a276925"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.771501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15014e61-c296-45be-b4f2-a7577a276925-kube-api-access-k8dj8" (OuterVolumeSpecName: "kube-api-access-k8dj8") pod "15014e61-c296-45be-b4f2-a7577a276925" (UID: "15014e61-c296-45be-b4f2-a7577a276925"). InnerVolumeSpecName "kube-api-access-k8dj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.822043 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15014e61-c296-45be-b4f2-a7577a276925" (UID: "15014e61-c296-45be-b4f2-a7577a276925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.833332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data" (OuterVolumeSpecName: "config-data") pod "15014e61-c296-45be-b4f2-a7577a276925" (UID: "15014e61-c296-45be-b4f2-a7577a276925"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.871034 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8dj8\" (UniqueName: \"kubernetes.io/projected/15014e61-c296-45be-b4f2-a7577a276925-kube-api-access-k8dj8\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.871066 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.871075 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.871084 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15014e61-c296-45be-b4f2-a7577a276925-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:15 crc kubenswrapper[4780]: I0219 08:41:15.871092 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15014e61-c296-45be-b4f2-a7577a276925-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:16 crc kubenswrapper[4780]: E0219 08:41:16.105221 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15014e61_c296_45be_b4f2_a7577a276925.slice/crio-56a86d749c67ec69279fc901c16390ce5cdc125866272f4142cfa6550688b23f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15014e61_c296_45be_b4f2_a7577a276925.slice\": RecentStats: unable to find data in memory cache]" 
Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.152057 4780 generic.go:334] "Generic (PLEG): container finished" podID="15014e61-c296-45be-b4f2-a7577a276925" containerID="497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa" exitCode=0 Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.152164 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd6cc5868-cncbp" event={"ID":"15014e61-c296-45be-b4f2-a7577a276925","Type":"ContainerDied","Data":"497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.152197 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd6cc5868-cncbp" event={"ID":"15014e61-c296-45be-b4f2-a7577a276925","Type":"ContainerDied","Data":"56a86d749c67ec69279fc901c16390ce5cdc125866272f4142cfa6550688b23f"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.152218 4780 scope.go:117] "RemoveContainer" containerID="497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.152349 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dd6cc5868-cncbp" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.157367 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-565f58cc6f-vwtvf" event={"ID":"7ef81227-694a-4bad-b32b-809d351ec668","Type":"ContainerStarted","Data":"a452ac5cf46573cac8666add1030829b2829b3e8372bbc020c65c25f15121df8"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.157409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-565f58cc6f-vwtvf" event={"ID":"7ef81227-694a-4bad-b32b-809d351ec668","Type":"ContainerStarted","Data":"b2d583309f49dd49d17c80f25a85efc092769c4ff96637acdd9473aa868d7556"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.157420 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-565f58cc6f-vwtvf" event={"ID":"7ef81227-694a-4bad-b32b-809d351ec668","Type":"ContainerStarted","Data":"41bda972e55b925a86e62f6b1b59a62ad4919b285a79ece69ad219fbb1476b3d"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.158203 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.158282 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.182669 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dd6cc5868-cncbp"] Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.191110 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dd6cc5868-cncbp"] Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.191236 4780 generic.go:334] "Generic (PLEG): container finished" podID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerID="fda8a3c991411d59a2c8372be2edfc118e891bb7f6cbd411491c03df8b3d4fa6" exitCode=0 Feb 19 08:41:16 crc kubenswrapper[4780]: 
I0219 08:41:16.191260 4780 generic.go:334] "Generic (PLEG): container finished" podID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerID="677208c92ad9b8a9eac0caac53164594983d51933b2cff6842e9ec4c7d43f72b" exitCode=2 Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.191270 4780 generic.go:334] "Generic (PLEG): container finished" podID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerID="1c285c47625009451b67e9b654cc339fc00b6bd0729bb0cdf614b340e8833ae2" exitCode=0 Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.191292 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerDied","Data":"fda8a3c991411d59a2c8372be2edfc118e891bb7f6cbd411491c03df8b3d4fa6"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.191317 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerDied","Data":"677208c92ad9b8a9eac0caac53164594983d51933b2cff6842e9ec4c7d43f72b"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.191327 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerDied","Data":"1c285c47625009451b67e9b654cc339fc00b6bd0729bb0cdf614b340e8833ae2"} Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.204796 4780 scope.go:117] "RemoveContainer" containerID="d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.237086 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-565f58cc6f-vwtvf" podStartSLOduration=3.237066428 podStartE2EDuration="3.237066428s" podCreationTimestamp="2026-02-19 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:16.206872682 +0000 
UTC m=+1218.950530131" watchObservedRunningTime="2026-02-19 08:41:16.237066428 +0000 UTC m=+1218.980723877" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.250748 4780 scope.go:117] "RemoveContainer" containerID="497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa" Feb 19 08:41:16 crc kubenswrapper[4780]: E0219 08:41:16.253971 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa\": container with ID starting with 497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa not found: ID does not exist" containerID="497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.254080 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa"} err="failed to get container status \"497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa\": rpc error: code = NotFound desc = could not find container \"497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa\": container with ID starting with 497d3a4c243e20c933700bd42144bfae0cdc70705df6355dc3509ab0a21674fa not found: ID does not exist" Feb 19 08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.254109 4780 scope.go:117] "RemoveContainer" containerID="d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414" Feb 19 08:41:16 crc kubenswrapper[4780]: E0219 08:41:16.255289 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414\": container with ID starting with d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414 not found: ID does not exist" containerID="d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414" Feb 19 
08:41:16 crc kubenswrapper[4780]: I0219 08:41:16.255329 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414"} err="failed to get container status \"d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414\": rpc error: code = NotFound desc = could not find container \"d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414\": container with ID starting with d596e4e7e2a6097d997a67f2943e813568818db8d137cd0671162cbbec3a4414 not found: ID does not exist" Feb 19 08:41:17 crc kubenswrapper[4780]: I0219 08:41:17.947932 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15014e61-c296-45be-b4f2-a7577a276925" path="/var/lib/kubelet/pods/15014e61-c296-45be-b4f2-a7577a276925/volumes" Feb 19 08:41:19 crc kubenswrapper[4780]: I0219 08:41:19.020942 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:41:19 crc kubenswrapper[4780]: I0219 08:41:19.230799 4780 generic.go:334] "Generic (PLEG): container finished" podID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerID="70407c617787a6edf2338681cbaa820a5dc2453f03dc45619e85306d7403b561" exitCode=0 Feb 19 08:41:19 crc kubenswrapper[4780]: I0219 08:41:19.230836 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerDied","Data":"70407c617787a6edf2338681cbaa820a5dc2453f03dc45619e85306d7403b561"} Feb 19 08:41:20 crc kubenswrapper[4780]: I0219 08:41:20.622284 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.622230 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-w4nft"] Feb 19 08:41:21 crc kubenswrapper[4780]: E0219 08:41:21.622915 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api-log" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.622934 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api-log" Feb 19 08:41:21 crc kubenswrapper[4780]: E0219 08:41:21.622975 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.622993 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.623183 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api-log" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.623199 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="15014e61-c296-45be-b4f2-a7577a276925" containerName="barbican-api" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.623826 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.647734 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-w4nft"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.689622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636fc704-8df3-4d54-98e0-6976bbf071b2-operator-scripts\") pod \"nova-api-db-create-w4nft\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.689708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xpj\" (UniqueName: \"kubernetes.io/projected/636fc704-8df3-4d54-98e0-6976bbf071b2-kube-api-access-67xpj\") pod \"nova-api-db-create-w4nft\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.724180 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kd78h"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.725263 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.732335 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kd78h"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.802830 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvqj\" (UniqueName: \"kubernetes.io/projected/873052c3-b896-4852-a0be-8c7f4b1edbf0-kube-api-access-vdvqj\") pod \"nova-cell0-db-create-kd78h\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.802982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636fc704-8df3-4d54-98e0-6976bbf071b2-operator-scripts\") pod \"nova-api-db-create-w4nft\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.803084 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873052c3-b896-4852-a0be-8c7f4b1edbf0-operator-scripts\") pod \"nova-cell0-db-create-kd78h\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.803115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xpj\" (UniqueName: \"kubernetes.io/projected/636fc704-8df3-4d54-98e0-6976bbf071b2-kube-api-access-67xpj\") pod \"nova-api-db-create-w4nft\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.804221 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/636fc704-8df3-4d54-98e0-6976bbf071b2-operator-scripts\") pod \"nova-api-db-create-w4nft\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.832321 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9c29-account-create-update-hcthl"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.833750 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.835815 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.839391 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xpj\" (UniqueName: \"kubernetes.io/projected/636fc704-8df3-4d54-98e0-6976bbf071b2-kube-api-access-67xpj\") pod \"nova-api-db-create-w4nft\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.840771 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9c29-account-create-update-hcthl"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.858045 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.908787 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjhb8\" (UniqueName: \"kubernetes.io/projected/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-kube-api-access-rjhb8\") pod \"nova-api-9c29-account-create-update-hcthl\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 
08:41:21.908835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873052c3-b896-4852-a0be-8c7f4b1edbf0-operator-scripts\") pod \"nova-cell0-db-create-kd78h\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.908924 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdvqj\" (UniqueName: \"kubernetes.io/projected/873052c3-b896-4852-a0be-8c7f4b1edbf0-kube-api-access-vdvqj\") pod \"nova-cell0-db-create-kd78h\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.909065 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-operator-scripts\") pod \"nova-api-9c29-account-create-update-hcthl\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.909824 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873052c3-b896-4852-a0be-8c7f4b1edbf0-operator-scripts\") pod \"nova-cell0-db-create-kd78h\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.937963 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55d4f7d9cb-jgtm7"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.938238 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55d4f7d9cb-jgtm7" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-api" 
containerID="cri-o://4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731" gracePeriod=30 Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.938865 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55d4f7d9cb-jgtm7" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-httpd" containerID="cri-o://ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973" gracePeriod=30 Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.947934 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdvqj\" (UniqueName: \"kubernetes.io/projected/873052c3-b896-4852-a0be-8c7f4b1edbf0-kube-api-access-vdvqj\") pod \"nova-cell0-db-create-kd78h\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.951423 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.976861 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4cjwm"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.978507 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4cjwm"] Feb 19 08:41:21 crc kubenswrapper[4780]: I0219 08:41:21.978624 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.010460 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6bf7c0-1e73-447f-be82-7c45be42304b-operator-scripts\") pod \"nova-cell1-db-create-4cjwm\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.010530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-operator-scripts\") pod \"nova-api-9c29-account-create-update-hcthl\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.010548 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjhb8\" (UniqueName: \"kubernetes.io/projected/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-kube-api-access-rjhb8\") pod \"nova-api-9c29-account-create-update-hcthl\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.010604 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm84l\" (UniqueName: \"kubernetes.io/projected/fe6bf7c0-1e73-447f-be82-7c45be42304b-kube-api-access-dm84l\") pod \"nova-cell1-db-create-4cjwm\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.014914 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-operator-scripts\") pod 
\"nova-api-9c29-account-create-update-hcthl\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.039394 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-623f-account-create-update-f2ks8"] Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.042457 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.048761 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.059223 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-f2ks8"] Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.061937 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjhb8\" (UniqueName: \"kubernetes.io/projected/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-kube-api-access-rjhb8\") pod \"nova-api-9c29-account-create-update-hcthl\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.108197 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.112492 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-operator-scripts\") pod \"nova-cell0-623f-account-create-update-f2ks8\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.112549 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66qp\" (UniqueName: \"kubernetes.io/projected/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-kube-api-access-l66qp\") pod \"nova-cell0-623f-account-create-update-f2ks8\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.112652 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6bf7c0-1e73-447f-be82-7c45be42304b-operator-scripts\") pod \"nova-cell1-db-create-4cjwm\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.112805 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm84l\" (UniqueName: \"kubernetes.io/projected/fe6bf7c0-1e73-447f-be82-7c45be42304b-kube-api-access-dm84l\") pod \"nova-cell1-db-create-4cjwm\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.113664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe6bf7c0-1e73-447f-be82-7c45be42304b-operator-scripts\") pod \"nova-cell1-db-create-4cjwm\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.147796 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm84l\" (UniqueName: \"kubernetes.io/projected/fe6bf7c0-1e73-447f-be82-7c45be42304b-kube-api-access-dm84l\") pod \"nova-cell1-db-create-4cjwm\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.215071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-operator-scripts\") pod \"nova-cell0-623f-account-create-update-f2ks8\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.215405 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66qp\" (UniqueName: \"kubernetes.io/projected/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-kube-api-access-l66qp\") pod \"nova-cell0-623f-account-create-update-f2ks8\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.215829 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-operator-scripts\") pod \"nova-cell0-623f-account-create-update-f2ks8\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.216152 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.232866 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-jzdls"] Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.232921 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.158:3000/\": dial tcp 10.217.0.158:3000: connect: connection refused" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.234471 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.236484 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.246400 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-jzdls"] Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.250068 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66qp\" (UniqueName: \"kubernetes.io/projected/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-kube-api-access-l66qp\") pod \"nova-cell0-623f-account-create-update-f2ks8\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.316005 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.316748 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33d5818-5750-4ac6-9016-e886177a9b4e-operator-scripts\") pod \"nova-cell1-d1c0-account-create-update-jzdls\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.316881 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs7kg\" (UniqueName: \"kubernetes.io/projected/c33d5818-5750-4ac6-9016-e886177a9b4e-kube-api-access-gs7kg\") pod \"nova-cell1-d1c0-account-create-update-jzdls\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.409012 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.419043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33d5818-5750-4ac6-9016-e886177a9b4e-operator-scripts\") pod \"nova-cell1-d1c0-account-create-update-jzdls\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.419141 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs7kg\" (UniqueName: \"kubernetes.io/projected/c33d5818-5750-4ac6-9016-e886177a9b4e-kube-api-access-gs7kg\") pod \"nova-cell1-d1c0-account-create-update-jzdls\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.420045 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33d5818-5750-4ac6-9016-e886177a9b4e-operator-scripts\") pod \"nova-cell1-d1c0-account-create-update-jzdls\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.435726 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs7kg\" (UniqueName: \"kubernetes.io/projected/c33d5818-5750-4ac6-9016-e886177a9b4e-kube-api-access-gs7kg\") pod \"nova-cell1-d1c0-account-create-update-jzdls\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:22 crc kubenswrapper[4780]: I0219 08:41:22.592617 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.294912 4780 generic.go:334] "Generic (PLEG): container finished" podID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerID="ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973" exitCode=0 Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.295269 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d4f7d9cb-jgtm7" event={"ID":"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a","Type":"ContainerDied","Data":"ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973"} Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.589879 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670498 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-scripts\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670539 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-combined-ca-bundle\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670603 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-sg-core-conf-yaml\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670700 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-run-httpd\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670773 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-log-httpd\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-config-data\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.670846 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zz48\" (UniqueName: \"kubernetes.io/projected/c070a55b-72c8-49f1-b459-c3c7a95cb573-kube-api-access-8zz48\") pod \"c070a55b-72c8-49f1-b459-c3c7a95cb573\" (UID: \"c070a55b-72c8-49f1-b459-c3c7a95cb573\") " Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.675244 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.675649 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.678826 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c070a55b-72c8-49f1-b459-c3c7a95cb573-kube-api-access-8zz48" (OuterVolumeSpecName: "kube-api-access-8zz48") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "kube-api-access-8zz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.685612 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-scripts" (OuterVolumeSpecName: "scripts") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.732795 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: W0219 08:41:23.760999 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod873052c3_b896_4852_a0be_8c7f4b1edbf0.slice/crio-f2a8ca22d94b7fe46573ea23f09f045fdb92f2609b49f631e7c646fd92d80b92 WatchSource:0}: Error finding container f2a8ca22d94b7fe46573ea23f09f045fdb92f2609b49f631e7c646fd92d80b92: Status 404 returned error can't find the container with id f2a8ca22d94b7fe46573ea23f09f045fdb92f2609b49f631e7c646fd92d80b92 Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.762115 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kd78h"] Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.773004 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.773031 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c070a55b-72c8-49f1-b459-c3c7a95cb573-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.773040 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zz48\" (UniqueName: \"kubernetes.io/projected/c070a55b-72c8-49f1-b459-c3c7a95cb573-kube-api-access-8zz48\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.773050 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.773059 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.778214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.805562 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-config-data" (OuterVolumeSpecName: "config-data") pod "c070a55b-72c8-49f1-b459-c3c7a95cb573" (UID: "c070a55b-72c8-49f1-b459-c3c7a95cb573"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.879102 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.879383 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c070a55b-72c8-49f1-b459-c3c7a95cb573-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:23 crc kubenswrapper[4780]: W0219 08:41:23.916719 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636fc704_8df3_4d54_98e0_6976bbf071b2.slice/crio-79ae61527332e21024265dbde563e1b8b9c6f7f68be9a9a5dedbffc1b02c589f WatchSource:0}: Error finding container 79ae61527332e21024265dbde563e1b8b9c6f7f68be9a9a5dedbffc1b02c589f: Status 404 returned error can't find the container with id 
79ae61527332e21024265dbde563e1b8b9c6f7f68be9a9a5dedbffc1b02c589f Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.986767 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-w4nft"] Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.989514 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-jzdls"] Feb 19 08:41:23 crc kubenswrapper[4780]: I0219 08:41:23.997256 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-f2ks8"] Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.004194 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9c29-account-create-update-hcthl"] Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.023373 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4cjwm"] Feb 19 08:41:24 crc kubenswrapper[4780]: W0219 08:41:24.028720 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33d5818_5750_4ac6_9016_e886177a9b4e.slice/crio-11a438f01212e9681af0466bb1c6ea84d87daeb5642a5013fb8c150c8e6e88a1 WatchSource:0}: Error finding container 11a438f01212e9681af0466bb1c6ea84d87daeb5642a5013fb8c150c8e6e88a1: Status 404 returned error can't find the container with id 11a438f01212e9681af0466bb1c6ea84d87daeb5642a5013fb8c150c8e6e88a1 Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.109373 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.109419 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.307912 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w4nft" 
event={"ID":"636fc704-8df3-4d54-98e0-6976bbf071b2","Type":"ContainerStarted","Data":"c8136cd50f1a775f65bf2095dc11b1215969e1fb9f8b041ab429d2039ee66443"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.307956 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w4nft" event={"ID":"636fc704-8df3-4d54-98e0-6976bbf071b2","Type":"ContainerStarted","Data":"79ae61527332e21024265dbde563e1b8b9c6f7f68be9a9a5dedbffc1b02c589f"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.321102 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" event={"ID":"c33d5818-5750-4ac6-9016-e886177a9b4e","Type":"ContainerStarted","Data":"11a438f01212e9681af0466bb1c6ea84d87daeb5642a5013fb8c150c8e6e88a1"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.329110 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-w4nft" podStartSLOduration=3.329092293 podStartE2EDuration="3.329092293s" podCreationTimestamp="2026-02-19 08:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:24.321632709 +0000 UTC m=+1227.065290158" watchObservedRunningTime="2026-02-19 08:41:24.329092293 +0000 UTC m=+1227.072749742" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.329198 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4cjwm" event={"ID":"fe6bf7c0-1e73-447f-be82-7c45be42304b","Type":"ContainerStarted","Data":"688cf4c4ee960390c218cd0228e4712d0c2c9f69f73d8ac0c3a70fe5fd4e9a64"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.333193 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bc29e551-efab-43d8-94d5-1c515a76dca9","Type":"ContainerStarted","Data":"99328aca990c1786f3a96df839e96c76191fce3e843e27d37b4c746716c1d54b"} Feb 19 08:41:24 crc 
kubenswrapper[4780]: I0219 08:41:24.339095 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kd78h" event={"ID":"873052c3-b896-4852-a0be-8c7f4b1edbf0","Type":"ContainerStarted","Data":"8e2d51373786153cd7127ee770c74e56db622810f20d0c53345fc6ffbf410603"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.339400 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kd78h" event={"ID":"873052c3-b896-4852-a0be-8c7f4b1edbf0","Type":"ContainerStarted","Data":"f2a8ca22d94b7fe46573ea23f09f045fdb92f2609b49f631e7c646fd92d80b92"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.347470 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.438437865 podStartE2EDuration="16.347453807s" podCreationTimestamp="2026-02-19 08:41:08 +0000 UTC" firstStartedPulling="2026-02-19 08:41:09.446814231 +0000 UTC m=+1212.190471680" lastFinishedPulling="2026-02-19 08:41:23.355830173 +0000 UTC m=+1226.099487622" observedRunningTime="2026-02-19 08:41:24.346428782 +0000 UTC m=+1227.090086231" watchObservedRunningTime="2026-02-19 08:41:24.347453807 +0000 UTC m=+1227.091111256" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.355700 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c070a55b-72c8-49f1-b459-c3c7a95cb573","Type":"ContainerDied","Data":"e2a4cc22ca47b9edf6488ec0e0e8bbe6420ac0d90786b61f402c8e9d468b1078"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.355749 4780 scope.go:117] "RemoveContainer" containerID="fda8a3c991411d59a2c8372be2edfc118e891bb7f6cbd411491c03df8b3d4fa6" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.355882 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.373468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c29-account-create-update-hcthl" event={"ID":"aace9000-22e3-4d6f-98b5-c7ce0c39f31c","Type":"ContainerStarted","Data":"84b46f497c69f0164a712329b600b8b9d01dfe772589f3593382aba1b7d75021"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.379010 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" event={"ID":"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf","Type":"ContainerStarted","Data":"42e8b539a7a1e23e49ba3ff2ee0c8db3bdfbb8cc2b4403e2ee02b06be173c18f"} Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.386417 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.389066 4780 scope.go:117] "RemoveContainer" containerID="677208c92ad9b8a9eac0caac53164594983d51933b2cff6842e9ec4c7d43f72b" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.409033 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.417106 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:24 crc kubenswrapper[4780]: E0219 08:41:24.417682 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="proxy-httpd" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.417743 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="proxy-httpd" Feb 19 08:41:24 crc kubenswrapper[4780]: E0219 08:41:24.417824 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-central-agent" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.417876 
4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-central-agent" Feb 19 08:41:24 crc kubenswrapper[4780]: E0219 08:41:24.417929 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="sg-core" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.417979 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="sg-core" Feb 19 08:41:24 crc kubenswrapper[4780]: E0219 08:41:24.418066 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-notification-agent" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.418134 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-notification-agent" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.418343 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-notification-agent" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.418407 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="ceilometer-central-agent" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.418468 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="proxy-httpd" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.418521 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" containerName="sg-core" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.419486 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9c29-account-create-update-hcthl" podStartSLOduration=3.419464737 
podStartE2EDuration="3.419464737s" podCreationTimestamp="2026-02-19 08:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:24.399670107 +0000 UTC m=+1227.143327556" watchObservedRunningTime="2026-02-19 08:41:24.419464737 +0000 UTC m=+1227.163122186" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.420113 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.425927 4780 scope.go:117] "RemoveContainer" containerID="70407c617787a6edf2338681cbaa820a5dc2453f03dc45619e85306d7403b561" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.426214 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.426416 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.461195 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.463436 4780 scope.go:117] "RemoveContainer" containerID="1c285c47625009451b67e9b654cc339fc00b6bd0729bb0cdf614b340e8833ae2" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.465497 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" podStartSLOduration=2.465477644 podStartE2EDuration="2.465477644s" podCreationTimestamp="2026-02-19 08:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:24.426048269 +0000 UTC m=+1227.169705718" watchObservedRunningTime="2026-02-19 08:41:24.465477644 +0000 UTC m=+1227.209135093" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 
08:41:24.488929 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-config-data\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.488985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb54h\" (UniqueName: \"kubernetes.io/projected/4c49a6da-1025-4edb-9afc-12c553fc795d-kube-api-access-sb54h\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.489017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-run-httpd\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.489082 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.489119 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-log-httpd\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.489160 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-scripts\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.489218 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.590414 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.590485 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-log-httpd\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.590524 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-scripts\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.590590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc 
kubenswrapper[4780]: I0219 08:41:24.590648 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-config-data\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.590683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb54h\" (UniqueName: \"kubernetes.io/projected/4c49a6da-1025-4edb-9afc-12c553fc795d-kube-api-access-sb54h\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.590725 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-run-httpd\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.591822 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-log-httpd\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.592100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-run-httpd\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.597013 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.597191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-config-data\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.598795 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-scripts\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.599696 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.613348 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb54h\" (UniqueName: \"kubernetes.io/projected/4c49a6da-1025-4edb-9afc-12c553fc795d-kube-api-access-sb54h\") pod \"ceilometer-0\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") " pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.741401 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:41:24 crc kubenswrapper[4780]: I0219 08:41:24.980743 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.364703 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.399558 4780 generic.go:334] "Generic (PLEG): container finished" podID="aace9000-22e3-4d6f-98b5-c7ce0c39f31c" containerID="6bc806239e81869760052d5f53e7b69eac6afe49ce83ac519cb286970ec688dd" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.399670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c29-account-create-update-hcthl" event={"ID":"aace9000-22e3-4d6f-98b5-c7ce0c39f31c","Type":"ContainerDied","Data":"6bc806239e81869760052d5f53e7b69eac6afe49ce83ac519cb286970ec688dd"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.414365 4780 generic.go:334] "Generic (PLEG): container finished" podID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerID="4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.414705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d4f7d9cb-jgtm7" event={"ID":"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a","Type":"ContainerDied","Data":"4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.414732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d4f7d9cb-jgtm7" event={"ID":"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a","Type":"ContainerDied","Data":"b76218eab9e7cf05d613307002a75ed8e83d06ae7bcca9d2a724eee1bba8bee6"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.414748 4780 scope.go:117] "RemoveContainer" containerID="ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.414855 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55d4f7d9cb-jgtm7" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.424469 4780 generic.go:334] "Generic (PLEG): container finished" podID="fe6bf7c0-1e73-447f-be82-7c45be42304b" containerID="a0bdeaf3e6cabab71f82d834e0e050541dbbc381af78e6fe80d504a23266934a" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.424539 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4cjwm" event={"ID":"fe6bf7c0-1e73-447f-be82-7c45be42304b","Type":"ContainerDied","Data":"a0bdeaf3e6cabab71f82d834e0e050541dbbc381af78e6fe80d504a23266934a"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.432202 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerStarted","Data":"e3bc8d91d8601a5999892ee4c116d617a32242691e235d746d301aad22003f7e"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.440214 4780 generic.go:334] "Generic (PLEG): container finished" podID="873052c3-b896-4852-a0be-8c7f4b1edbf0" containerID="8e2d51373786153cd7127ee770c74e56db622810f20d0c53345fc6ffbf410603" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.440314 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kd78h" event={"ID":"873052c3-b896-4852-a0be-8c7f4b1edbf0","Type":"ContainerDied","Data":"8e2d51373786153cd7127ee770c74e56db622810f20d0c53345fc6ffbf410603"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.445292 4780 generic.go:334] "Generic (PLEG): container finished" podID="67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" containerID="d30162c6653792d87df8e73905c42561bdf82bb30f675f92b1c1bedff12d66f4" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.445348 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" 
event={"ID":"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf","Type":"ContainerDied","Data":"d30162c6653792d87df8e73905c42561bdf82bb30f675f92b1c1bedff12d66f4"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.451402 4780 generic.go:334] "Generic (PLEG): container finished" podID="636fc704-8df3-4d54-98e0-6976bbf071b2" containerID="c8136cd50f1a775f65bf2095dc11b1215969e1fb9f8b041ab429d2039ee66443" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.451451 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w4nft" event={"ID":"636fc704-8df3-4d54-98e0-6976bbf071b2","Type":"ContainerDied","Data":"c8136cd50f1a775f65bf2095dc11b1215969e1fb9f8b041ab429d2039ee66443"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.452577 4780 generic.go:334] "Generic (PLEG): container finished" podID="c33d5818-5750-4ac6-9016-e886177a9b4e" containerID="65bcd04f511759d61996cfe8497192b0cf4af2b30ccf58e868bc58f2de65f5fa" exitCode=0 Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.453395 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" event={"ID":"c33d5818-5750-4ac6-9016-e886177a9b4e","Type":"ContainerDied","Data":"65bcd04f511759d61996cfe8497192b0cf4af2b30ccf58e868bc58f2de65f5fa"} Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.460186 4780 scope.go:117] "RemoveContainer" containerID="4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.506959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-ovndb-tls-certs\") pod \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.507116 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-config\") pod \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.507200 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-httpd-config\") pod \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.507248 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-combined-ca-bundle\") pod \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.507316 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llx9g\" (UniqueName: \"kubernetes.io/projected/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-kube-api-access-llx9g\") pod \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\" (UID: \"510700ac-52ab-4ff8-b2c5-61ce6b2acf0a\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.516103 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" (UID: "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.524826 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-kube-api-access-llx9g" (OuterVolumeSpecName: "kube-api-access-llx9g") pod "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" (UID: "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a"). InnerVolumeSpecName "kube-api-access-llx9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.557623 4780 scope.go:117] "RemoveContainer" containerID="ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973" Feb 19 08:41:25 crc kubenswrapper[4780]: E0219 08:41:25.558235 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973\": container with ID starting with ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973 not found: ID does not exist" containerID="ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.558269 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973"} err="failed to get container status \"ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973\": rpc error: code = NotFound desc = could not find container \"ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973\": container with ID starting with ae9f110cc37360b828006c2e2174e01ff591ef4ee19a21ed8472c4f50b944973 not found: ID does not exist" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.558325 4780 scope.go:117] "RemoveContainer" containerID="4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731" Feb 19 08:41:25 crc kubenswrapper[4780]: E0219 08:41:25.558683 4780 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731\": container with ID starting with 4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731 not found: ID does not exist" containerID="4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.558736 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731"} err="failed to get container status \"4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731\": rpc error: code = NotFound desc = could not find container \"4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731\": container with ID starting with 4a6fb4e0b903065a11ed96a0c68246e925a92c53f4f8a232cab35352a32ca731 not found: ID does not exist" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.586166 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-config" (OuterVolumeSpecName: "config") pod "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" (UID: "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.590216 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" (UID: "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.609803 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.609837 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.609847 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llx9g\" (UniqueName: \"kubernetes.io/projected/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-kube-api-access-llx9g\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.609856 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.698289 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" (UID: "510700ac-52ab-4ff8-b2c5-61ce6b2acf0a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.712906 4780 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.732200 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.758671 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55d4f7d9cb-jgtm7"] Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.773494 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-55d4f7d9cb-jgtm7"] Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.813657 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdvqj\" (UniqueName: \"kubernetes.io/projected/873052c3-b896-4852-a0be-8c7f4b1edbf0-kube-api-access-vdvqj\") pod \"873052c3-b896-4852-a0be-8c7f4b1edbf0\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.813707 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873052c3-b896-4852-a0be-8c7f4b1edbf0-operator-scripts\") pod \"873052c3-b896-4852-a0be-8c7f4b1edbf0\" (UID: \"873052c3-b896-4852-a0be-8c7f4b1edbf0\") " Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.814514 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/873052c3-b896-4852-a0be-8c7f4b1edbf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "873052c3-b896-4852-a0be-8c7f4b1edbf0" (UID: "873052c3-b896-4852-a0be-8c7f4b1edbf0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.817312 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873052c3-b896-4852-a0be-8c7f4b1edbf0-kube-api-access-vdvqj" (OuterVolumeSpecName: "kube-api-access-vdvqj") pod "873052c3-b896-4852-a0be-8c7f4b1edbf0" (UID: "873052c3-b896-4852-a0be-8c7f4b1edbf0"). InnerVolumeSpecName "kube-api-access-vdvqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.915880 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdvqj\" (UniqueName: \"kubernetes.io/projected/873052c3-b896-4852-a0be-8c7f4b1edbf0-kube-api-access-vdvqj\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.916117 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873052c3-b896-4852-a0be-8c7f4b1edbf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.947798 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" path="/var/lib/kubelet/pods/510700ac-52ab-4ff8-b2c5-61ce6b2acf0a/volumes" Feb 19 08:41:25 crc kubenswrapper[4780]: I0219 08:41:25.948423 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c070a55b-72c8-49f1-b459-c3c7a95cb573" path="/var/lib/kubelet/pods/c070a55b-72c8-49f1-b459-c3c7a95cb573/volumes" Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.464274 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerStarted","Data":"e1aa0ed4a96a5debd0d36fcc25941ee4c4954156c5a0cd6fe7987054b9b5ba98"} Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.464626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerStarted","Data":"4b589d19b5ffe8bd63b080008fac105a6e8c22071b72db1a0c0d2a820dcb5704"} Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.466367 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kd78h" Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.467279 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kd78h" event={"ID":"873052c3-b896-4852-a0be-8c7f4b1edbf0","Type":"ContainerDied","Data":"f2a8ca22d94b7fe46573ea23f09f045fdb92f2609b49f631e7c646fd92d80b92"} Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.467322 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a8ca22d94b7fe46573ea23f09f045fdb92f2609b49f631e7c646fd92d80b92" Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.864065 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w4nft" Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.942644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67xpj\" (UniqueName: \"kubernetes.io/projected/636fc704-8df3-4d54-98e0-6976bbf071b2-kube-api-access-67xpj\") pod \"636fc704-8df3-4d54-98e0-6976bbf071b2\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.943006 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636fc704-8df3-4d54-98e0-6976bbf071b2-operator-scripts\") pod \"636fc704-8df3-4d54-98e0-6976bbf071b2\" (UID: \"636fc704-8df3-4d54-98e0-6976bbf071b2\") " Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.944027 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636fc704-8df3-4d54-98e0-6976bbf071b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "636fc704-8df3-4d54-98e0-6976bbf071b2" (UID: "636fc704-8df3-4d54-98e0-6976bbf071b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:41:26 crc kubenswrapper[4780]: I0219 08:41:26.949532 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636fc704-8df3-4d54-98e0-6976bbf071b2-kube-api-access-67xpj" (OuterVolumeSpecName: "kube-api-access-67xpj") pod "636fc704-8df3-4d54-98e0-6976bbf071b2" (UID: "636fc704-8df3-4d54-98e0-6976bbf071b2"). InnerVolumeSpecName "kube-api-access-67xpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.044863 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67xpj\" (UniqueName: \"kubernetes.io/projected/636fc704-8df3-4d54-98e0-6976bbf071b2-kube-api-access-67xpj\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.044891 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636fc704-8df3-4d54-98e0-6976bbf071b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.121427 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.128439 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4cjwm" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.147161 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.154248 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-hcthl" Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.247872 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33d5818-5750-4ac6-9016-e886177a9b4e-operator-scripts\") pod \"c33d5818-5750-4ac6-9016-e886177a9b4e\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248180 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm84l\" (UniqueName: \"kubernetes.io/projected/fe6bf7c0-1e73-447f-be82-7c45be42304b-kube-api-access-dm84l\") pod \"fe6bf7c0-1e73-447f-be82-7c45be42304b\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248240 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l66qp\" (UniqueName: \"kubernetes.io/projected/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-kube-api-access-l66qp\") pod \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248275 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs7kg\" (UniqueName: \"kubernetes.io/projected/c33d5818-5750-4ac6-9016-e886177a9b4e-kube-api-access-gs7kg\") pod \"c33d5818-5750-4ac6-9016-e886177a9b4e\" (UID: \"c33d5818-5750-4ac6-9016-e886177a9b4e\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248305 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjhb8\" (UniqueName: \"kubernetes.io/projected/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-kube-api-access-rjhb8\") pod \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248342 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-operator-scripts\") pod \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\" (UID: \"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248378 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-operator-scripts\") pod \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\" (UID: \"aace9000-22e3-4d6f-98b5-c7ce0c39f31c\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248422 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6bf7c0-1e73-447f-be82-7c45be42304b-operator-scripts\") pod \"fe6bf7c0-1e73-447f-be82-7c45be42304b\" (UID: \"fe6bf7c0-1e73-447f-be82-7c45be42304b\") " Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248573 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33d5818-5750-4ac6-9016-e886177a9b4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c33d5818-5750-4ac6-9016-e886177a9b4e" (UID: "c33d5818-5750-4ac6-9016-e886177a9b4e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.248977 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33d5818-5750-4ac6-9016-e886177a9b4e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.249027 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" (UID: "67b6be5d-b212-4ef8-8bed-3e9e4337a0bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.249931 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6bf7c0-1e73-447f-be82-7c45be42304b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe6bf7c0-1e73-447f-be82-7c45be42304b" (UID: "fe6bf7c0-1e73-447f-be82-7c45be42304b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.250064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aace9000-22e3-4d6f-98b5-c7ce0c39f31c" (UID: "aace9000-22e3-4d6f-98b5-c7ce0c39f31c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.254170 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-kube-api-access-rjhb8" (OuterVolumeSpecName: "kube-api-access-rjhb8") pod "aace9000-22e3-4d6f-98b5-c7ce0c39f31c" (UID: "aace9000-22e3-4d6f-98b5-c7ce0c39f31c"). InnerVolumeSpecName "kube-api-access-rjhb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.254172 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6bf7c0-1e73-447f-be82-7c45be42304b-kube-api-access-dm84l" (OuterVolumeSpecName: "kube-api-access-dm84l") pod "fe6bf7c0-1e73-447f-be82-7c45be42304b" (UID: "fe6bf7c0-1e73-447f-be82-7c45be42304b"). InnerVolumeSpecName "kube-api-access-dm84l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.254197 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-kube-api-access-l66qp" (OuterVolumeSpecName: "kube-api-access-l66qp") pod "67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" (UID: "67b6be5d-b212-4ef8-8bed-3e9e4337a0bf"). InnerVolumeSpecName "kube-api-access-l66qp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.254207 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33d5818-5750-4ac6-9016-e886177a9b4e-kube-api-access-gs7kg" (OuterVolumeSpecName: "kube-api-access-gs7kg") pod "c33d5818-5750-4ac6-9016-e886177a9b4e" (UID: "c33d5818-5750-4ac6-9016-e886177a9b4e"). InnerVolumeSpecName "kube-api-access-gs7kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.350597 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm84l\" (UniqueName: \"kubernetes.io/projected/fe6bf7c0-1e73-447f-be82-7c45be42304b-kube-api-access-dm84l\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.350836 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l66qp\" (UniqueName: \"kubernetes.io/projected/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-kube-api-access-l66qp\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.350933 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs7kg\" (UniqueName: \"kubernetes.io/projected/c33d5818-5750-4ac6-9016-e886177a9b4e-kube-api-access-gs7kg\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.350999 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjhb8\" (UniqueName: \"kubernetes.io/projected/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-kube-api-access-rjhb8\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.351075 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.351167 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aace9000-22e3-4d6f-98b5-c7ce0c39f31c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.351233 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6bf7c0-1e73-447f-be82-7c45be42304b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.477857 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w4nft"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.483318 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w4nft" event={"ID":"636fc704-8df3-4d54-98e0-6976bbf071b2","Type":"ContainerDied","Data":"79ae61527332e21024265dbde563e1b8b9c6f7f68be9a9a5dedbffc1b02c589f"}
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.483366 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ae61527332e21024265dbde563e1b8b9c6f7f68be9a9a5dedbffc1b02c589f"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.484553 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls" event={"ID":"c33d5818-5750-4ac6-9016-e886177a9b4e","Type":"ContainerDied","Data":"11a438f01212e9681af0466bb1c6ea84d87daeb5642a5013fb8c150c8e6e88a1"}
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.484575 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11a438f01212e9681af0466bb1c6ea84d87daeb5642a5013fb8c150c8e6e88a1"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.484592 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-jzdls"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.485762 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4cjwm" event={"ID":"fe6bf7c0-1e73-447f-be82-7c45be42304b","Type":"ContainerDied","Data":"688cf4c4ee960390c218cd0228e4712d0c2c9f69f73d8ac0c3a70fe5fd4e9a64"}
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.485783 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688cf4c4ee960390c218cd0228e4712d0c2c9f69f73d8ac0c3a70fe5fd4e9a64"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.485861 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4cjwm"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.487540 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerStarted","Data":"f1e8b8658d752484fbf0a5aaca751c392c9d9875069730f6c44b7a0f5711bdd6"}
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.488696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c29-account-create-update-hcthl" event={"ID":"aace9000-22e3-4d6f-98b5-c7ce0c39f31c","Type":"ContainerDied","Data":"84b46f497c69f0164a712329b600b8b9d01dfe772589f3593382aba1b7d75021"}
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.488713 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-hcthl"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.488721 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84b46f497c69f0164a712329b600b8b9d01dfe772589f3593382aba1b7d75021"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.489841 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-623f-account-create-update-f2ks8" event={"ID":"67b6be5d-b212-4ef8-8bed-3e9e4337a0bf","Type":"ContainerDied","Data":"42e8b539a7a1e23e49ba3ff2ee0c8db3bdfbb8cc2b4403e2ee02b06be173c18f"}
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.489872 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42e8b539a7a1e23e49ba3ff2ee0c8db3bdfbb8cc2b4403e2ee02b06be173c18f"
Feb 19 08:41:27 crc kubenswrapper[4780]: I0219 08:41:27.489902 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-f2ks8"
Feb 19 08:41:29 crc kubenswrapper[4780]: I0219 08:41:29.511795 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerStarted","Data":"fbc5391abe0300f94b2550438d70c70dcc87f8ec8a58fbd3da085ecc3bfb17a9"}
Feb 19 08:41:29 crc kubenswrapper[4780]: I0219 08:41:29.512739 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 08:41:29 crc kubenswrapper[4780]: I0219 08:41:29.540888 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.274500928 podStartE2EDuration="5.540863164s" podCreationTimestamp="2026-02-19 08:41:24 +0000 UTC" firstStartedPulling="2026-02-19 08:41:24.995272736 +0000 UTC m=+1227.738930185" lastFinishedPulling="2026-02-19 08:41:28.261634972 +0000 UTC m=+1231.005292421" observedRunningTime="2026-02-19 08:41:29.53949205 +0000 UTC m=+1232.283149499" watchObservedRunningTime="2026-02-19 08:41:29.540863164 +0000 UTC m=+1232.284520613"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.331672 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r6stw"]
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.332834 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d5818-5750-4ac6-9016-e886177a9b4e" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.332859 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d5818-5750-4ac6-9016-e886177a9b4e" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.332894 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.332906 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.332932 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-api"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.332945 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-api"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.332966 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636fc704-8df3-4d54-98e0-6976bbf071b2" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.332977 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="636fc704-8df3-4d54-98e0-6976bbf071b2" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.333010 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-httpd"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333022 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-httpd"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.333047 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aace9000-22e3-4d6f-98b5-c7ce0c39f31c" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333059 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aace9000-22e3-4d6f-98b5-c7ce0c39f31c" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.333086 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6bf7c0-1e73-447f-be82-7c45be42304b" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333097 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6bf7c0-1e73-447f-be82-7c45be42304b" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: E0219 08:41:32.333113 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873052c3-b896-4852-a0be-8c7f4b1edbf0" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333154 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="873052c3-b896-4852-a0be-8c7f4b1edbf0" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333437 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333471 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="636fc704-8df3-4d54-98e0-6976bbf071b2" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333499 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6bf7c0-1e73-447f-be82-7c45be42304b" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333512 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-httpd"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333535 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33d5818-5750-4ac6-9016-e886177a9b4e" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333557 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="873052c3-b896-4852-a0be-8c7f4b1edbf0" containerName="mariadb-database-create"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333571 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="510700ac-52ab-4ff8-b2c5-61ce6b2acf0a" containerName="neutron-api"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.333591 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aace9000-22e3-4d6f-98b5-c7ce0c39f31c" containerName="mariadb-account-create-update"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.334534 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.337056 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.337467 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.338843 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b5zfc"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.340298 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r6stw"]
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.443410 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-config-data\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.443775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-scripts\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.444068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk2f8\" (UniqueName: \"kubernetes.io/projected/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-kube-api-access-lk2f8\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.444274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.546016 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk2f8\" (UniqueName: \"kubernetes.io/projected/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-kube-api-access-lk2f8\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.546084 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.546188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-config-data\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.546251 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-scripts\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.552003 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-scripts\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.552525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-config-data\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.553972 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.565006 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk2f8\" (UniqueName: \"kubernetes.io/projected/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-kube-api-access-lk2f8\") pod \"nova-cell0-conductor-db-sync-r6stw\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:32 crc kubenswrapper[4780]: I0219 08:41:32.666054 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r6stw"
Feb 19 08:41:33 crc kubenswrapper[4780]: I0219 08:41:33.160096 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r6stw"]
Feb 19 08:41:33 crc kubenswrapper[4780]: I0219 08:41:33.583493 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r6stw" event={"ID":"2ced3cd6-55dd-41ca-a3cb-25862916cfcd","Type":"ContainerStarted","Data":"7e401294f7821c70c2dc7a875afb141a4bd97b63ce63edefade4ead9553babf1"}
Feb 19 08:41:35 crc kubenswrapper[4780]: I0219 08:41:35.456425 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:41:35 crc kubenswrapper[4780]: I0219 08:41:35.457414 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-httpd" containerID="cri-o://e4c6412a78837024fd2d94325b0f9362535152ef61396c7e5123de2e2c43a794" gracePeriod=30
Feb 19 08:41:35 crc kubenswrapper[4780]: I0219 08:41:35.457348 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-log" containerID="cri-o://626adbcd162eb220882fdd6339ba5675a7ad356ad4f45ccb34ceb5c9adeeeec3" gracePeriod=30
Feb 19 08:41:35 crc kubenswrapper[4780]: I0219 08:41:35.605991 4780 generic.go:334] "Generic (PLEG): container finished" podID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerID="626adbcd162eb220882fdd6339ba5675a7ad356ad4f45ccb34ceb5c9adeeeec3" exitCode=143
Feb 19 08:41:35 crc kubenswrapper[4780]: I0219 08:41:35.606051 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d","Type":"ContainerDied","Data":"626adbcd162eb220882fdd6339ba5675a7ad356ad4f45ccb34ceb5c9adeeeec3"}
Feb 19 08:41:36 crc kubenswrapper[4780]: I0219 08:41:36.684445 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:41:36 crc kubenswrapper[4780]: I0219 08:41:36.684812 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-central-agent" containerID="cri-o://4b589d19b5ffe8bd63b080008fac105a6e8c22071b72db1a0c0d2a820dcb5704" gracePeriod=30
Feb 19 08:41:36 crc kubenswrapper[4780]: I0219 08:41:36.684894 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="proxy-httpd" containerID="cri-o://fbc5391abe0300f94b2550438d70c70dcc87f8ec8a58fbd3da085ecc3bfb17a9" gracePeriod=30
Feb 19 08:41:36 crc kubenswrapper[4780]: I0219 08:41:36.684909 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-notification-agent" containerID="cri-o://e1aa0ed4a96a5debd0d36fcc25941ee4c4954156c5a0cd6fe7987054b9b5ba98" gracePeriod=30
Feb 19 08:41:36 crc kubenswrapper[4780]: I0219 08:41:36.686243 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="sg-core" containerID="cri-o://f1e8b8658d752484fbf0a5aaca751c392c9d9875069730f6c44b7a0f5711bdd6" gracePeriod=30
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.497820 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.498048 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-log" containerID="cri-o://d880363aee9763fc6a800e80dfeed449e140b3a7295080df9834ff9a3bd002db" gracePeriod=30
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.498149 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-httpd" containerID="cri-o://b0d24751d84a3171fbef0fecf9836682ec7b58e2857bc3f42252296301bb363d" gracePeriod=30
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.629150 4780 generic.go:334] "Generic (PLEG): container finished" podID="5e9345e6-0539-439d-a341-112ec8638694" containerID="d880363aee9763fc6a800e80dfeed449e140b3a7295080df9834ff9a3bd002db" exitCode=143
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.629228 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e9345e6-0539-439d-a341-112ec8638694","Type":"ContainerDied","Data":"d880363aee9763fc6a800e80dfeed449e140b3a7295080df9834ff9a3bd002db"}
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.636908 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerID="fbc5391abe0300f94b2550438d70c70dcc87f8ec8a58fbd3da085ecc3bfb17a9" exitCode=0
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.636950 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerID="f1e8b8658d752484fbf0a5aaca751c392c9d9875069730f6c44b7a0f5711bdd6" exitCode=2
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.636961 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerID="4b589d19b5ffe8bd63b080008fac105a6e8c22071b72db1a0c0d2a820dcb5704" exitCode=0
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.636992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerDied","Data":"fbc5391abe0300f94b2550438d70c70dcc87f8ec8a58fbd3da085ecc3bfb17a9"}
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.637039 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerDied","Data":"f1e8b8658d752484fbf0a5aaca751c392c9d9875069730f6c44b7a0f5711bdd6"}
Feb 19 08:41:37 crc kubenswrapper[4780]: I0219 08:41:37.637052 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerDied","Data":"4b589d19b5ffe8bd63b080008fac105a6e8c22071b72db1a0c0d2a820dcb5704"}
Feb 19 08:41:39 crc kubenswrapper[4780]: I0219 08:41:39.660696 4780 generic.go:334] "Generic (PLEG): container finished" podID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerID="e4c6412a78837024fd2d94325b0f9362535152ef61396c7e5123de2e2c43a794" exitCode=0
Feb 19 08:41:39 crc kubenswrapper[4780]: I0219 08:41:39.661216 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d","Type":"ContainerDied","Data":"e4c6412a78837024fd2d94325b0f9362535152ef61396c7e5123de2e2c43a794"}
Feb 19 08:41:39 crc kubenswrapper[4780]: I0219 08:41:39.664378 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerID="e1aa0ed4a96a5debd0d36fcc25941ee4c4954156c5a0cd6fe7987054b9b5ba98" exitCode=0
Feb 19 08:41:39 crc kubenswrapper[4780]: I0219 08:41:39.664422 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerDied","Data":"e1aa0ed4a96a5debd0d36fcc25941ee4c4954156c5a0cd6fe7987054b9b5ba98"}
Feb 19 08:41:40 crc kubenswrapper[4780]: I0219 08:41:40.581933 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.146:9292/healthcheck\": dial tcp 10.217.0.146:9292: connect: connection refused"
Feb 19 08:41:40 crc kubenswrapper[4780]: I0219 08:41:40.581976 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.146:9292/healthcheck\": dial tcp 10.217.0.146:9292: connect: connection refused"
Feb 19 08:41:41 crc kubenswrapper[4780]: I0219 08:41:41.684018 4780 generic.go:334] "Generic (PLEG): container finished" podID="5e9345e6-0539-439d-a341-112ec8638694" containerID="b0d24751d84a3171fbef0fecf9836682ec7b58e2857bc3f42252296301bb363d" exitCode=0
Feb 19 08:41:41 crc kubenswrapper[4780]: I0219 08:41:41.684067 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e9345e6-0539-439d-a341-112ec8638694","Type":"ContainerDied","Data":"b0d24751d84a3171fbef0fecf9836682ec7b58e2857bc3f42252296301bb363d"}
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.456395 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.467726 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.475407 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577788 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-sg-core-conf-yaml\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577838 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtsc6\" (UniqueName: \"kubernetes.io/projected/5e9345e6-0539-439d-a341-112ec8638694-kube-api-access-vtsc6\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577866 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-httpd-run\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577895 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-combined-ca-bundle\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577923 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-public-tls-certs\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577956 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-combined-ca-bundle\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.577981 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-run-httpd\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578004 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-combined-ca-bundle\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578029 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-log-httpd\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578075 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4glg\" (UniqueName: \"kubernetes.io/projected/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-kube-api-access-n4glg\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578102 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578146 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578185 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-internal-tls-certs\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578255 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-config-data\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-logs\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578442 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-logs\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578458 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-scripts\") pod \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\" (UID: \"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578522 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-config-data\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578591 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-scripts\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578620 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-config-data\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578647 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb54h\" (UniqueName: \"kubernetes.io/projected/4c49a6da-1025-4edb-9afc-12c553fc795d-kube-api-access-sb54h\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578682 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-httpd-run\") pod \"5e9345e6-0539-439d-a341-112ec8638694\" (UID: \"5e9345e6-0539-439d-a341-112ec8638694\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.578708 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-scripts\") pod \"4c49a6da-1025-4edb-9afc-12c553fc795d\" (UID: \"4c49a6da-1025-4edb-9afc-12c553fc795d\") "
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.586612 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.588163 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-scripts" (OuterVolumeSpecName: "scripts") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.593994 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.594264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.595151 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-kube-api-access-n4glg" (OuterVolumeSpecName: "kube-api-access-n4glg") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "kube-api-access-n4glg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.597094 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.597189 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.599763 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-logs" (OuterVolumeSpecName: "logs") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.600439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-logs" (OuterVolumeSpecName: "logs") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.610981 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-scripts" (OuterVolumeSpecName: "scripts") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.614166 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-scripts" (OuterVolumeSpecName: "scripts") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.621179 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9345e6-0539-439d-a341-112ec8638694-kube-api-access-vtsc6" (OuterVolumeSpecName: "kube-api-access-vtsc6") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "kube-api-access-vtsc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.643016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c49a6da-1025-4edb-9afc-12c553fc795d-kube-api-access-sb54h" (OuterVolumeSpecName: "kube-api-access-sb54h") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "kube-api-access-sb54h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.644289 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.665856 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683264 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683334 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683349 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c49a6da-1025-4edb-9afc-12c553fc795d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683364 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4glg\" (UniqueName: \"kubernetes.io/projected/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-kube-api-access-n4glg\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683401 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683420 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683435 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683448 4780 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683459 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683472 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683485 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb54h\" (UniqueName: \"kubernetes.io/projected/4c49a6da-1025-4edb-9afc-12c553fc795d-kube-api-access-sb54h\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683498 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e9345e6-0539-439d-a341-112ec8638694-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683510 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683522 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtsc6\" (UniqueName: \"kubernetes.io/projected/5e9345e6-0539-439d-a341-112ec8638694-kube-api-access-vtsc6\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.683535 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: 
I0219 08:41:43.706237 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.744969 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.745337 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c49a6da-1025-4edb-9afc-12c553fc795d","Type":"ContainerDied","Data":"e3bc8d91d8601a5999892ee4c116d617a32242691e235d746d301aad22003f7e"} Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.745912 4780 scope.go:117] "RemoveContainer" containerID="fbc5391abe0300f94b2550438d70c70dcc87f8ec8a58fbd3da085ecc3bfb17a9" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.767895 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e9345e6-0539-439d-a341-112ec8638694","Type":"ContainerDied","Data":"31cf59dc8ddde18777a881c43db8495a39fdb1af21796decc53ae9c18bad8841"} Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.768137 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.771469 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.776977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea2c4c02-d3f9-4007-b334-9dcb153e2d3d","Type":"ContainerDied","Data":"672795fc6c95ba9703cb06d2def7f277aaf6d67de1c4ed89e49b60fea548f47e"} Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.777169 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.804739 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r6stw" event={"ID":"2ced3cd6-55dd-41ca-a3cb-25862916cfcd","Type":"ContainerStarted","Data":"3ff7cf1b87b3928c92dd3bcafc98d39fa7f7f44d629b68dc9b1be48e0e5a72f3"} Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.807844 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.815972 4780 scope.go:117] "RemoveContainer" containerID="f1e8b8658d752484fbf0a5aaca751c392c9d9875069730f6c44b7a0f5711bdd6" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.816561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.816587 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.816669 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.816690 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.830854 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.833416 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.835749 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-r6stw" podStartSLOduration=1.767322547 podStartE2EDuration="11.835704391s" podCreationTimestamp="2026-02-19 08:41:32 +0000 UTC" firstStartedPulling="2026-02-19 08:41:33.165248088 +0000 UTC m=+1235.908905527" lastFinishedPulling="2026-02-19 08:41:43.233629922 +0000 UTC m=+1245.977287371" observedRunningTime="2026-02-19 08:41:43.827468137 +0000 UTC m=+1246.571125586" watchObservedRunningTime="2026-02-19 08:41:43.835704391 +0000 UTC m=+1246.579361850" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.847930 4780 scope.go:117] "RemoveContainer" containerID="e1aa0ed4a96a5debd0d36fcc25941ee4c4954156c5a0cd6fe7987054b9b5ba98" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.849695 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-config-data" (OuterVolumeSpecName: "config-data") pod "5e9345e6-0539-439d-a341-112ec8638694" (UID: "5e9345e6-0539-439d-a341-112ec8638694"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.852425 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-config-data" (OuterVolumeSpecName: "config-data") pod "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" (UID: "ea2c4c02-d3f9-4007-b334-9dcb153e2d3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.855896 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.871470 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-config-data" (OuterVolumeSpecName: "config-data") pod "4c49a6da-1025-4edb-9afc-12c553fc795d" (UID: "4c49a6da-1025-4edb-9afc-12c553fc795d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.872030 4780 scope.go:117] "RemoveContainer" containerID="4b589d19b5ffe8bd63b080008fac105a6e8c22071b72db1a0c0d2a820dcb5704" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.892552 4780 scope.go:117] "RemoveContainer" containerID="b0d24751d84a3171fbef0fecf9836682ec7b58e2857bc3f42252296301bb363d" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918573 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918610 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c49a6da-1025-4edb-9afc-12c553fc795d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918619 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918631 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918640 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918648 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.918656 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9345e6-0539-439d-a341-112ec8638694-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.921635 4780 scope.go:117] "RemoveContainer" containerID="d880363aee9763fc6a800e80dfeed449e140b3a7295080df9834ff9a3bd002db" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.953508 4780 scope.go:117] "RemoveContainer" containerID="e4c6412a78837024fd2d94325b0f9362535152ef61396c7e5123de2e2c43a794" Feb 19 08:41:43 crc kubenswrapper[4780]: I0219 08:41:43.991837 4780 scope.go:117] "RemoveContainer" containerID="626adbcd162eb220882fdd6339ba5675a7ad356ad4f45ccb34ceb5c9adeeeec3" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.086132 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.098948 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:44 
crc kubenswrapper[4780]: I0219 08:41:44.111472 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.120559 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.130817 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131468 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-log" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131489 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-log" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131502 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-log" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131510 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-log" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131535 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="sg-core" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131542 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="sg-core" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131557 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-notification-agent" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131563 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" 
containerName="ceilometer-notification-agent" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131575 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="proxy-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131580 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="proxy-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131594 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131603 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131621 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-central-agent" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131628 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-central-agent" Feb 19 08:41:44 crc kubenswrapper[4780]: E0219 08:41:44.131641 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131648 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131836 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="proxy-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131870 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-log" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131878 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-notification-agent" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131891 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" containerName="glance-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131898 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-log" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131908 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="sg-core" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131925 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9345e6-0539-439d-a341-112ec8638694" containerName="glance-httpd" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.131936 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" containerName="ceilometer-central-agent" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.133962 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.142006 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.157067 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.157392 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.169088 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.171711 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.205099 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.206522 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.216936 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.217146 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.217238 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.217405 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xmvzm"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.217529 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.219044 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225561 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-config-data\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225621 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225655 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-run-httpd\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225707 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnjw\" (UniqueName: \"kubernetes.io/projected/eea93a11-04b6-4394-9f72-576540d36f5d-kube-api-access-7qnjw\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225749 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-scripts\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.225808 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-log-httpd\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.226353 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.226629 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.237293 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.252848 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327025 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327099 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327245 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327271 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26bp\" (UniqueName: \"kubernetes.io/projected/0a69047c-4c8d-4b93-82b3-005a9e83f686-kube-api-access-s26bp\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327309 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327333 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327369 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327437 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-config-data\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327505 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-run-httpd\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327531 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327587 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327610 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qnjw\" (UniqueName: \"kubernetes.io/projected/eea93a11-04b6-4394-9f72-576540d36f5d-kube-api-access-7qnjw\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327634 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-scripts\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327664 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327684 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327702 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327733 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-logs\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327765 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zg7m\" (UniqueName: \"kubernetes.io/projected/fa951d8d-6e05-4995-9a80-fb0808216e61-kube-api-access-7zg7m\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327798 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327823 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-log-httpd\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.327853 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.330613 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-run-httpd\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.330849 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-log-httpd\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.334583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-config-data\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.344083 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.344737 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.345092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-scripts\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.348978 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qnjw\" (UniqueName: \"kubernetes.io/projected/eea93a11-04b6-4394-9f72-576540d36f5d-kube-api-access-7qnjw\") pod \"ceilometer-0\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") " pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26bp\" (UniqueName: \"kubernetes.io/projected/0a69047c-4c8d-4b93-82b3-005a9e83f686-kube-api-access-s26bp\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430673 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430756 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430799 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430836 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430910 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.430987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431017 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-logs\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zg7m\" (UniqueName: \"kubernetes.io/projected/fa951d8d-6e05-4995-9a80-fb0808216e61-kube-api-access-7zg7m\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431190 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431232 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431276 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431329 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.431555 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.432311 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.432845 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.433177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-logs\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.433744 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.435442 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.436336 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.437108 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.437335 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.443888 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.444461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.447780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.448954 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.453137 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26bp\" (UniqueName: \"kubernetes.io/projected/0a69047c-4c8d-4b93-82b3-005a9e83f686-kube-api-access-s26bp\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.458250 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zg7m\" (UniqueName: \"kubernetes.io/projected/fa951d8d-6e05-4995-9a80-fb0808216e61-kube-api-access-7zg7m\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.482447 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " pod="openstack/glance-default-external-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.491395 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.492757 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.584489 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:44 crc kubenswrapper[4780]: I0219 08:41:44.593062 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:45 crc kubenswrapper[4780]: W0219 08:41:45.000020 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeea93a11_04b6_4394_9f72_576540d36f5d.slice/crio-c6d61d0c2f0c8ac956fdbc623294926cc69eebbb0c75d5282f7c39d154abce44 WatchSource:0}: Error finding container c6d61d0c2f0c8ac956fdbc623294926cc69eebbb0c75d5282f7c39d154abce44: Status 404 returned error can't find the container with id c6d61d0c2f0c8ac956fdbc623294926cc69eebbb0c75d5282f7c39d154abce44
Feb 19 08:41:45 crc kubenswrapper[4780]: I0219 08:41:45.005529 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:41:45 crc kubenswrapper[4780]: I0219 08:41:45.298969 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 08:41:45 crc kubenswrapper[4780]: I0219 08:41:45.378657 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:41:45 crc kubenswrapper[4780]: W0219 08:41:45.391648 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a69047c_4c8d_4b93_82b3_005a9e83f686.slice/crio-9a41684c0474d3e51271bf7ca643fee47573946e045e610d6bf81b446e427ef7 WatchSource:0}: Error finding container 9a41684c0474d3e51271bf7ca643fee47573946e045e610d6bf81b446e427ef7: Status 404 returned error can't find the container with id 9a41684c0474d3e51271bf7ca643fee47573946e045e610d6bf81b446e427ef7
Feb 19 08:41:46 crc kubenswrapper[4780]: I0219 08:41:46.238376 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c49a6da-1025-4edb-9afc-12c553fc795d" path="/var/lib/kubelet/pods/4c49a6da-1025-4edb-9afc-12c553fc795d/volumes"
Feb 19 08:41:46 crc kubenswrapper[4780]: I0219 08:41:46.239620 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9345e6-0539-439d-a341-112ec8638694" path="/var/lib/kubelet/pods/5e9345e6-0539-439d-a341-112ec8638694/volumes"
Feb 19 08:41:46 crc kubenswrapper[4780]: I0219 08:41:46.242213 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2c4c02-d3f9-4007-b334-9dcb153e2d3d" path="/var/lib/kubelet/pods/ea2c4c02-d3f9-4007-b334-9dcb153e2d3d/volumes"
Feb 19 08:41:46 crc kubenswrapper[4780]: I0219 08:41:46.244963 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a69047c-4c8d-4b93-82b3-005a9e83f686","Type":"ContainerStarted","Data":"9a41684c0474d3e51271bf7ca643fee47573946e045e610d6bf81b446e427ef7"}
Feb 19 08:41:46 crc kubenswrapper[4780]: I0219 08:41:46.245537 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerStarted","Data":"c6d61d0c2f0c8ac956fdbc623294926cc69eebbb0c75d5282f7c39d154abce44"}
Feb 19 08:41:46 crc kubenswrapper[4780]: I0219 08:41:46.253316 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa951d8d-6e05-4995-9a80-fb0808216e61","Type":"ContainerStarted","Data":"370598fe716fe885844bbff003aa132b19df1be1ff9b55be0ec1fa7bdd383e79"}
Feb 19 08:41:47 crc kubenswrapper[4780]: I0219 08:41:47.270724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa951d8d-6e05-4995-9a80-fb0808216e61","Type":"ContainerStarted","Data":"4162407cdf5682d804be4b4717c823043ddf3ab7e9943293c605618e3930edf7"}
Feb 19 08:41:47 crc kubenswrapper[4780]: I0219 08:41:47.273808 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a69047c-4c8d-4b93-82b3-005a9e83f686","Type":"ContainerStarted","Data":"511ae1a6b95e07069083114a3d15f66169e2683396feb32e7d98594881f3165c"}
Feb 19 08:41:47 crc kubenswrapper[4780]: I0219 08:41:47.275412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerStarted","Data":"7fd9123c3c93cf97de952f4968c96a4b69a44f2e5159ea4589984dd56c117f1e"}
Feb 19 08:41:48 crc kubenswrapper[4780]: I0219 08:41:48.285733 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerStarted","Data":"bc5ef7ef4bd750a4764ce70848d7c646155afde6f4f12a3f795f04acd33850e5"}
Feb 19 08:41:48 crc kubenswrapper[4780]: I0219 08:41:48.286177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerStarted","Data":"488d4e0a6038546d21ae96cb49f871e3d307c55c4627631365523f8c0e3d5704"}
Feb 19 08:41:48 crc kubenswrapper[4780]: I0219 08:41:48.287291 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa951d8d-6e05-4995-9a80-fb0808216e61","Type":"ContainerStarted","Data":"bcacaffefa0805038ec68a239723691428dbbee367f236e1e7e7b362dd644e5e"}
Feb 19 08:41:48 crc kubenswrapper[4780]: I0219 08:41:48.289823 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a69047c-4c8d-4b93-82b3-005a9e83f686","Type":"ContainerStarted","Data":"470d613c3f2933cabeb420246069bef8c1516a00e6cebf54fd8f45fec126403e"}
Feb 19 08:41:48 crc kubenswrapper[4780]: I0219 08:41:48.314784 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.3147638950000005 podStartE2EDuration="4.314763895s" podCreationTimestamp="2026-02-19 08:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:48.306252105 +0000 UTC m=+1251.049909554" watchObservedRunningTime="2026-02-19 08:41:48.314763895 +0000 UTC m=+1251.058421354"
Feb 19 08:41:48 crc kubenswrapper[4780]: I0219 08:41:48.335868 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.335843906 podStartE2EDuration="4.335843906s" podCreationTimestamp="2026-02-19 08:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:41:48.330433743 +0000 UTC m=+1251.074091192" watchObservedRunningTime="2026-02-19 08:41:48.335843906 +0000 UTC m=+1251.079501365"
Feb 19 08:41:50 crc kubenswrapper[4780]: I0219 08:41:50.309046 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerStarted","Data":"1c5c0ed331390b6094b925152821cd290354a6796a55cd56bab6e1424fa74311"}
Feb 19 08:41:50 crc kubenswrapper[4780]: I0219 08:41:50.310638 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 08:41:50 crc kubenswrapper[4780]: I0219 08:41:50.343454 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.027254369 podStartE2EDuration="6.343435117s" podCreationTimestamp="2026-02-19 08:41:44 +0000 UTC" firstStartedPulling="2026-02-19 08:41:45.00230174 +0000 UTC m=+1247.745959199" lastFinishedPulling="2026-02-19 08:41:49.318482498 +0000 UTC m=+1252.062139947" observedRunningTime="2026-02-19 08:41:50.334646469 +0000 UTC m=+1253.078303928" watchObservedRunningTime="2026-02-19 08:41:50.343435117 +0000 UTC m=+1253.087092566"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.584725 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.585874 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.593512 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.593697 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.664705 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.670715 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.691957 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:54 crc kubenswrapper[4780]: I0219 08:41:54.726908 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:55 crc kubenswrapper[4780]: I0219 08:41:55.633007 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:55 crc kubenswrapper[4780]: I0219 08:41:55.633888 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:55 crc kubenswrapper[4780]: I0219 08:41:55.634042 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:55 crc kubenswrapper[4780]: I0219 08:41:55.634328 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.535395 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.653035 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.654595 4780 generic.go:334] "Generic (PLEG): container finished" podID="2ced3cd6-55dd-41ca-a3cb-25862916cfcd" containerID="3ff7cf1b87b3928c92dd3bcafc98d39fa7f7f44d629b68dc9b1be48e0e5a72f3" exitCode=0
Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.654684 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.655649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r6stw" event={"ID":"2ced3cd6-55dd-41ca-a3cb-25862916cfcd","Type":"ContainerDied","Data":"3ff7cf1b87b3928c92dd3bcafc98d39fa7f7f44d629b68dc9b1be48e0e5a72f3"}
Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.655775 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 08:41:57 
crc kubenswrapper[4780]: I0219 08:41:57.666852 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 08:41:57 crc kubenswrapper[4780]: I0219 08:41:57.713548 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.213494 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r6stw" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.310202 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-combined-ca-bundle\") pod \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.310297 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk2f8\" (UniqueName: \"kubernetes.io/projected/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-kube-api-access-lk2f8\") pod \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.310390 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-scripts\") pod \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.310464 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-config-data\") pod \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\" (UID: \"2ced3cd6-55dd-41ca-a3cb-25862916cfcd\") " Feb 19 08:41:59 crc kubenswrapper[4780]: 
I0219 08:41:59.317039 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-kube-api-access-lk2f8" (OuterVolumeSpecName: "kube-api-access-lk2f8") pod "2ced3cd6-55dd-41ca-a3cb-25862916cfcd" (UID: "2ced3cd6-55dd-41ca-a3cb-25862916cfcd"). InnerVolumeSpecName "kube-api-access-lk2f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.317213 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-scripts" (OuterVolumeSpecName: "scripts") pod "2ced3cd6-55dd-41ca-a3cb-25862916cfcd" (UID: "2ced3cd6-55dd-41ca-a3cb-25862916cfcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.337488 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-config-data" (OuterVolumeSpecName: "config-data") pod "2ced3cd6-55dd-41ca-a3cb-25862916cfcd" (UID: "2ced3cd6-55dd-41ca-a3cb-25862916cfcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.338280 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ced3cd6-55dd-41ca-a3cb-25862916cfcd" (UID: "2ced3cd6-55dd-41ca-a3cb-25862916cfcd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.412829 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.412873 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk2f8\" (UniqueName: \"kubernetes.io/projected/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-kube-api-access-lk2f8\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.412887 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.412897 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ced3cd6-55dd-41ca-a3cb-25862916cfcd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.686495 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r6stw" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.686562 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r6stw" event={"ID":"2ced3cd6-55dd-41ca-a3cb-25862916cfcd","Type":"ContainerDied","Data":"7e401294f7821c70c2dc7a875afb141a4bd97b63ce63edefade4ead9553babf1"} Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.686620 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e401294f7821c70c2dc7a875afb141a4bd97b63ce63edefade4ead9553babf1" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.962762 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 08:41:59 crc kubenswrapper[4780]: E0219 08:41:59.965635 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ced3cd6-55dd-41ca-a3cb-25862916cfcd" containerName="nova-cell0-conductor-db-sync" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.965808 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ced3cd6-55dd-41ca-a3cb-25862916cfcd" containerName="nova-cell0-conductor-db-sync" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.966110 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ced3cd6-55dd-41ca-a3cb-25862916cfcd" containerName="nova-cell0-conductor-db-sync" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.966763 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.972217 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b5zfc" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.972553 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 08:41:59 crc kubenswrapper[4780]: I0219 08:41:59.999304 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.023671 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.023789 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.023849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7w7n\" (UniqueName: \"kubernetes.io/projected/51a5891a-27e3-404a-b8c8-51c2399e8903-kube-api-access-r7w7n\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.125609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.125898 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.125938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7w7n\" (UniqueName: \"kubernetes.io/projected/51a5891a-27e3-404a-b8c8-51c2399e8903-kube-api-access-r7w7n\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.130391 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.133655 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.141484 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7w7n\" (UniqueName: \"kubernetes.io/projected/51a5891a-27e3-404a-b8c8-51c2399e8903-kube-api-access-r7w7n\") pod \"nova-cell0-conductor-0\" (UID: 
\"51a5891a-27e3-404a-b8c8-51c2399e8903\") " pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.306554 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:00 crc kubenswrapper[4780]: I0219 08:42:00.748222 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 08:42:00 crc kubenswrapper[4780]: W0219 08:42:00.748578 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51a5891a_27e3_404a_b8c8_51c2399e8903.slice/crio-5cf2d77de3da8e1e323bc1e83691ae27f0a79aa622128fa5ce05302023b8d99f WatchSource:0}: Error finding container 5cf2d77de3da8e1e323bc1e83691ae27f0a79aa622128fa5ce05302023b8d99f: Status 404 returned error can't find the container with id 5cf2d77de3da8e1e323bc1e83691ae27f0a79aa622128fa5ce05302023b8d99f Feb 19 08:42:01 crc kubenswrapper[4780]: I0219 08:42:01.707406 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51a5891a-27e3-404a-b8c8-51c2399e8903","Type":"ContainerStarted","Data":"db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869"} Feb 19 08:42:01 crc kubenswrapper[4780]: I0219 08:42:01.707821 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51a5891a-27e3-404a-b8c8-51c2399e8903","Type":"ContainerStarted","Data":"5cf2d77de3da8e1e323bc1e83691ae27f0a79aa622128fa5ce05302023b8d99f"} Feb 19 08:42:01 crc kubenswrapper[4780]: I0219 08:42:01.708870 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:01 crc kubenswrapper[4780]: I0219 08:42:01.736642 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.736622549 podStartE2EDuration="2.736622549s" 
podCreationTimestamp="2026-02-19 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:01.725446593 +0000 UTC m=+1264.469104042" watchObservedRunningTime="2026-02-19 08:42:01.736622549 +0000 UTC m=+1264.480279998" Feb 19 08:42:05 crc kubenswrapper[4780]: I0219 08:42:05.345625 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.051002 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4q56p"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.057660 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.061439 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.067929 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.070162 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4q56p"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.135162 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-config-data\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.135219 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.135271 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbl6k\" (UniqueName: \"kubernetes.io/projected/490019fb-c322-4355-b6c6-5eb9eaba34ca-kube-api-access-wbl6k\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.135397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-scripts\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.237322 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-scripts\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.237404 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-config-data\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.237423 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.237451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbl6k\" (UniqueName: \"kubernetes.io/projected/490019fb-c322-4355-b6c6-5eb9eaba34ca-kube-api-access-wbl6k\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.245349 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.249008 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.250999 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-config-data\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.270644 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-scripts\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.270717 4780 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.281237 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbl6k\" (UniqueName: \"kubernetes.io/projected/490019fb-c322-4355-b6c6-5eb9eaba34ca-kube-api-access-wbl6k\") pod \"nova-cell0-cell-mapping-4q56p\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.285053 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.318418 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.345282 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz95l\" (UniqueName: \"kubernetes.io/projected/fdc1b251-c528-4e9e-870a-c06efda64bb4-kube-api-access-cz95l\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.345426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-config-data\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.345459 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 
08:42:06.345550 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1b251-c528-4e9e-870a-c06efda64bb4-logs\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.383014 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.439216 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.440726 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.447769 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz95l\" (UniqueName: \"kubernetes.io/projected/fdc1b251-c528-4e9e-870a-c06efda64bb4-kube-api-access-cz95l\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.447855 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-config-data\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.447882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.447929 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1b251-c528-4e9e-870a-c06efda64bb4-logs\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.448509 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1b251-c528-4e9e-870a-c06efda64bb4-logs\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.456879 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.461240 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-pb2s2"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.462750 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.464632 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.465249 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-config-data\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.467399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.489760 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz95l\" (UniqueName: \"kubernetes.io/projected/fdc1b251-c528-4e9e-870a-c06efda64bb4-kube-api-access-cz95l\") pod \"nova-metadata-0\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.500192 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-pb2s2"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.504426 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.505817 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.520065 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.520683 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.553426 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555178 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555425 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-config-data\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555526 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-nb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555616 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkj8\" (UniqueName: \"kubernetes.io/projected/9bd54f39-dfc3-4015-809e-814ff2c9782b-kube-api-access-vmkj8\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555669 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-config\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555722 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzbb\" (UniqueName: \"kubernetes.io/projected/bd2019cd-bf0c-411f-855c-9f93dcd39d26-kube-api-access-qxzbb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd54f39-dfc3-4015-809e-814ff2c9782b-logs\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-svc\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555800 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kw8\" (UniqueName: \"kubernetes.io/projected/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-kube-api-access-c9kw8\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555824 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-swift-storage-0\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555882 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-sb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.555907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-config-data\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.558346 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.578990 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664716 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkj8\" (UniqueName: \"kubernetes.io/projected/9bd54f39-dfc3-4015-809e-814ff2c9782b-kube-api-access-vmkj8\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-config\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664787 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzbb\" (UniqueName: \"kubernetes.io/projected/bd2019cd-bf0c-411f-855c-9f93dcd39d26-kube-api-access-qxzbb\") pod 
\"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664806 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd54f39-dfc3-4015-809e-814ff2c9782b-logs\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664826 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-svc\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kw8\" (UniqueName: \"kubernetes.io/projected/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-kube-api-access-c9kw8\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664862 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-swift-storage-0\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664880 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664920 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6f5\" (UniqueName: \"kubernetes.io/projected/c24ac63b-902e-402b-a8bb-5468f6ccad62-kube-api-access-9x6f5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-sb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664955 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-config-data\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.664981 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.665010 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-config-data\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.665034 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-nb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.665867 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-config\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.666169 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd54f39-dfc3-4015-809e-814ff2c9782b-logs\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.667086 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-nb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.667494 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-swift-storage-0\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.668215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-sb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.668779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.669982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-config-data\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.672952 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-svc\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.673081 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-config-data\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.679067 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc 
kubenswrapper[4780]: I0219 08:42:06.683287 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkj8\" (UniqueName: \"kubernetes.io/projected/9bd54f39-dfc3-4015-809e-814ff2c9782b-kube-api-access-vmkj8\") pod \"nova-api-0\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.683669 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kw8\" (UniqueName: \"kubernetes.io/projected/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-kube-api-access-c9kw8\") pod \"nova-scheduler-0\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.688702 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzbb\" (UniqueName: \"kubernetes.io/projected/bd2019cd-bf0c-411f-855c-9f93dcd39d26-kube-api-access-qxzbb\") pod \"dnsmasq-dns-75ddbf7c75-pb2s2\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.702994 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.766968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6f5\" (UniqueName: \"kubernetes.io/projected/c24ac63b-902e-402b-a8bb-5468f6ccad62-kube-api-access-9x6f5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.767102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.767189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.771444 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.772591 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 
08:42:06.783842 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6f5\" (UniqueName: \"kubernetes.io/projected/c24ac63b-902e-402b-a8bb-5468f6ccad62-kube-api-access-9x6f5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.893926 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.918888 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.933405 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.944324 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:06 crc kubenswrapper[4780]: I0219 08:42:06.991436 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4q56p"] Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.162606 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:07 crc kubenswrapper[4780]: W0219 08:42:07.198509 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc1b251_c528_4e9e_870a_c06efda64bb4.slice/crio-f290d16a2de1989cf2d22d8eb853eb4731533334bb58a5b5c83d0eed06713aad WatchSource:0}: Error finding container f290d16a2de1989cf2d22d8eb853eb4731533334bb58a5b5c83d0eed06713aad: Status 404 returned error can't find the container with id f290d16a2de1989cf2d22d8eb853eb4731533334bb58a5b5c83d0eed06713aad Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.326463 4780 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell1-conductor-db-sync-bv9tc"] Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.327729 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.329775 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.330122 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.351555 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bv9tc"] Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.383287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.383460 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-config-data\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.383496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-scripts\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " 
pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.383590 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkmkt\" (UniqueName: \"kubernetes.io/projected/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-kube-api-access-kkmkt\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.485731 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.485866 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-config-data\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.485904 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-scripts\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.485973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkmkt\" (UniqueName: \"kubernetes.io/projected/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-kube-api-access-kkmkt\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: 
\"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.492303 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-scripts\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.492879 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.499314 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-config-data\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.508984 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkmkt\" (UniqueName: \"kubernetes.io/projected/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-kube-api-access-kkmkt\") pod \"nova-cell1-conductor-db-sync-bv9tc\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.736951 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.779941 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdc1b251-c528-4e9e-870a-c06efda64bb4","Type":"ContainerStarted","Data":"f290d16a2de1989cf2d22d8eb853eb4731533334bb58a5b5c83d0eed06713aad"} Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.782019 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4q56p" event={"ID":"490019fb-c322-4355-b6c6-5eb9eaba34ca","Type":"ContainerStarted","Data":"c0da2919c8a8269894ab28300296cfe09a550a15aab73746a5abbe2f79a6020e"} Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.782067 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4q56p" event={"ID":"490019fb-c322-4355-b6c6-5eb9eaba34ca","Type":"ContainerStarted","Data":"6e978c20e3d28f56b7ffd362b4acda21fa0c6cf9ee5aa3921021cd2bfdd1a875"} Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.813222 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4q56p" podStartSLOduration=1.8132032200000001 podStartE2EDuration="1.81320322s" podCreationTimestamp="2026-02-19 08:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:07.802449684 +0000 UTC m=+1270.546107133" watchObservedRunningTime="2026-02-19 08:42:07.81320322 +0000 UTC m=+1270.556860669" Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.898989 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.908973 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.919598 4780 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-pb2s2"] Feb 19 08:42:07 crc kubenswrapper[4780]: I0219 08:42:07.927662 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.212887 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bv9tc"] Feb 19 08:42:08 crc kubenswrapper[4780]: W0219 08:42:08.244837 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d5b732f_e5c7_4bec_8c32_4d16e07ce21a.slice/crio-ffa835bffe8d2ce53492183d8b453188676a06c00f5215f5088447da51a119b5 WatchSource:0}: Error finding container ffa835bffe8d2ce53492183d8b453188676a06c00f5215f5088447da51a119b5: Status 404 returned error can't find the container with id ffa835bffe8d2ce53492183d8b453188676a06c00f5215f5088447da51a119b5 Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.795672 4780 generic.go:334] "Generic (PLEG): container finished" podID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerID="1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3" exitCode=0 Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.795785 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" event={"ID":"bd2019cd-bf0c-411f-855c-9f93dcd39d26","Type":"ContainerDied","Data":"1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.803373 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" event={"ID":"bd2019cd-bf0c-411f-855c-9f93dcd39d26","Type":"ContainerStarted","Data":"5f5a69c3d29cc5edf959c237c9d4caa7c0157c6d19bb6792a057c2a34e3d45e2"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.805096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" 
event={"ID":"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a","Type":"ContainerStarted","Data":"c1e91a87f73224be9b1e1c661e6be1cc05ace2a3bc8a0c6cc1bf125f0b7a0238"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.805140 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" event={"ID":"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a","Type":"ContainerStarted","Data":"ffa835bffe8d2ce53492183d8b453188676a06c00f5215f5088447da51a119b5"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.807397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13","Type":"ContainerStarted","Data":"4ad5f5ea13ca4cc7dca8f25f43a6c8c414925cae92cb4b4381307f56c8b54dd2"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.809506 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9bd54f39-dfc3-4015-809e-814ff2c9782b","Type":"ContainerStarted","Data":"e60db7cd6ef344734ad733588de3e79ec21683744165a047d630a6adba13b068"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.811357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c24ac63b-902e-402b-a8bb-5468f6ccad62","Type":"ContainerStarted","Data":"a12b6a444fc9127bbe6869ad95c2baba57a0195a7f813db9373f5a9ee3fe8e3f"} Feb 19 08:42:08 crc kubenswrapper[4780]: I0219 08:42:08.853828 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" podStartSLOduration=1.853808795 podStartE2EDuration="1.853808795s" podCreationTimestamp="2026-02-19 08:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:08.846288599 +0000 UTC m=+1271.589946048" watchObservedRunningTime="2026-02-19 08:42:08.853808795 +0000 UTC m=+1271.597466244" Feb 19 08:42:10 crc kubenswrapper[4780]: 
I0219 08:42:10.121231 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:10 crc kubenswrapper[4780]: I0219 08:42:10.138934 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.850366 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9bd54f39-dfc3-4015-809e-814ff2c9782b","Type":"ContainerStarted","Data":"38040ea56b225242c2e5c9cae8b740be82727750e90fd9c3aaba7666320b30fe"} Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.864785 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c24ac63b-902e-402b-a8bb-5468f6ccad62","Type":"ContainerStarted","Data":"1de7dd1d38ac3ce7143b43768b32bfccd71711cf336d1f62a1ca7c2a1768a1e9"} Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.864931 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c24ac63b-902e-402b-a8bb-5468f6ccad62" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1de7dd1d38ac3ce7143b43768b32bfccd71711cf336d1f62a1ca7c2a1768a1e9" gracePeriod=30 Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.869716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" event={"ID":"bd2019cd-bf0c-411f-855c-9f93dcd39d26","Type":"ContainerStarted","Data":"b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd"} Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.869885 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.880207 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13","Type":"ContainerStarted","Data":"8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd"} Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.882304 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdc1b251-c528-4e9e-870a-c06efda64bb4","Type":"ContainerStarted","Data":"99a7f5081aa662d6253cfb1d9cd4767f9e0834822cf985518fd1020150d4b30f"} Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.894875 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.4979802429999998 podStartE2EDuration="5.894852624s" podCreationTimestamp="2026-02-19 08:42:06 +0000 UTC" firstStartedPulling="2026-02-19 08:42:07.901533423 +0000 UTC m=+1270.645190872" lastFinishedPulling="2026-02-19 08:42:11.298405794 +0000 UTC m=+1274.042063253" observedRunningTime="2026-02-19 08:42:11.883343829 +0000 UTC m=+1274.627001278" watchObservedRunningTime="2026-02-19 08:42:11.894852624 +0000 UTC m=+1274.638510073" Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.894917 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.907170 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" podStartSLOduration=5.907152198 podStartE2EDuration="5.907152198s" podCreationTimestamp="2026-02-19 08:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:11.905262991 +0000 UTC m=+1274.648920440" watchObservedRunningTime="2026-02-19 08:42:11.907152198 +0000 UTC m=+1274.650809647" Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.924553 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=2.473005984 podStartE2EDuration="5.924511886s" podCreationTimestamp="2026-02-19 08:42:06 +0000 UTC" firstStartedPulling="2026-02-19 08:42:07.838869384 +0000 UTC m=+1270.582526833" lastFinishedPulling="2026-02-19 08:42:11.290375246 +0000 UTC m=+1274.034032735" observedRunningTime="2026-02-19 08:42:11.918432396 +0000 UTC m=+1274.662089855" watchObservedRunningTime="2026-02-19 08:42:11.924511886 +0000 UTC m=+1274.668169335" Feb 19 08:42:11 crc kubenswrapper[4780]: I0219 08:42:11.952669 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:12 crc kubenswrapper[4780]: I0219 08:42:12.895053 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdc1b251-c528-4e9e-870a-c06efda64bb4","Type":"ContainerStarted","Data":"162727be53a65a5a11494fc1a64265fba4083bde0e36529503d27f63a7a8272b"} Feb 19 08:42:12 crc kubenswrapper[4780]: I0219 08:42:12.895193 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-log" containerID="cri-o://99a7f5081aa662d6253cfb1d9cd4767f9e0834822cf985518fd1020150d4b30f" gracePeriod=30 Feb 19 08:42:12 crc kubenswrapper[4780]: I0219 08:42:12.895334 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-metadata" containerID="cri-o://162727be53a65a5a11494fc1a64265fba4083bde0e36529503d27f63a7a8272b" gracePeriod=30 Feb 19 08:42:12 crc kubenswrapper[4780]: I0219 08:42:12.900185 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9bd54f39-dfc3-4015-809e-814ff2c9782b","Type":"ContainerStarted","Data":"444e0f110cd648e5d3384fce038e8395504e0a4acdfad5c0a2771d0cb9525f7f"} Feb 19 08:42:12 crc kubenswrapper[4780]: I0219 08:42:12.933620 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.834115328 podStartE2EDuration="6.933410438s" podCreationTimestamp="2026-02-19 08:42:06 +0000 UTC" firstStartedPulling="2026-02-19 08:42:07.202616881 +0000 UTC m=+1269.946274330" lastFinishedPulling="2026-02-19 08:42:11.301911951 +0000 UTC m=+1274.045569440" observedRunningTime="2026-02-19 08:42:12.925100773 +0000 UTC m=+1275.668758222" watchObservedRunningTime="2026-02-19 08:42:12.933410438 +0000 UTC m=+1275.677067877" Feb 19 08:42:12 crc kubenswrapper[4780]: I0219 08:42:12.967872 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.589798482 podStartE2EDuration="6.967852639s" podCreationTimestamp="2026-02-19 08:42:06 +0000 UTC" firstStartedPulling="2026-02-19 08:42:07.911228762 +0000 UTC m=+1270.654886211" lastFinishedPulling="2026-02-19 08:42:11.289282899 +0000 UTC m=+1274.032940368" observedRunningTime="2026-02-19 08:42:12.954747015 +0000 UTC m=+1275.698404504" watchObservedRunningTime="2026-02-19 08:42:12.967852639 +0000 UTC m=+1275.711510088" Feb 19 08:42:13 crc kubenswrapper[4780]: I0219 08:42:13.942548 4780 generic.go:334] "Generic (PLEG): container finished" podID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerID="162727be53a65a5a11494fc1a64265fba4083bde0e36529503d27f63a7a8272b" exitCode=0 Feb 19 08:42:13 crc kubenswrapper[4780]: I0219 08:42:13.942857 4780 generic.go:334] "Generic (PLEG): container finished" podID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerID="99a7f5081aa662d6253cfb1d9cd4767f9e0834822cf985518fd1020150d4b30f" exitCode=143 Feb 19 08:42:13 crc kubenswrapper[4780]: I0219 08:42:13.944755 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdc1b251-c528-4e9e-870a-c06efda64bb4","Type":"ContainerDied","Data":"162727be53a65a5a11494fc1a64265fba4083bde0e36529503d27f63a7a8272b"} Feb 19 08:42:13 crc 
kubenswrapper[4780]: I0219 08:42:13.945120 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdc1b251-c528-4e9e-870a-c06efda64bb4","Type":"ContainerDied","Data":"99a7f5081aa662d6253cfb1d9cd4767f9e0834822cf985518fd1020150d4b30f"} Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.069896 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.256377 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz95l\" (UniqueName: \"kubernetes.io/projected/fdc1b251-c528-4e9e-870a-c06efda64bb4-kube-api-access-cz95l\") pod \"fdc1b251-c528-4e9e-870a-c06efda64bb4\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.257459 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-combined-ca-bundle\") pod \"fdc1b251-c528-4e9e-870a-c06efda64bb4\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.257572 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1b251-c528-4e9e-870a-c06efda64bb4-logs\") pod \"fdc1b251-c528-4e9e-870a-c06efda64bb4\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.257620 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-config-data\") pod \"fdc1b251-c528-4e9e-870a-c06efda64bb4\" (UID: \"fdc1b251-c528-4e9e-870a-c06efda64bb4\") " Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.257889 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/fdc1b251-c528-4e9e-870a-c06efda64bb4-logs" (OuterVolumeSpecName: "logs") pod "fdc1b251-c528-4e9e-870a-c06efda64bb4" (UID: "fdc1b251-c528-4e9e-870a-c06efda64bb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.258437 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1b251-c528-4e9e-870a-c06efda64bb4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.272218 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc1b251-c528-4e9e-870a-c06efda64bb4-kube-api-access-cz95l" (OuterVolumeSpecName: "kube-api-access-cz95l") pod "fdc1b251-c528-4e9e-870a-c06efda64bb4" (UID: "fdc1b251-c528-4e9e-870a-c06efda64bb4"). InnerVolumeSpecName "kube-api-access-cz95l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.296069 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-config-data" (OuterVolumeSpecName: "config-data") pod "fdc1b251-c528-4e9e-870a-c06efda64bb4" (UID: "fdc1b251-c528-4e9e-870a-c06efda64bb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.318943 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdc1b251-c528-4e9e-870a-c06efda64bb4" (UID: "fdc1b251-c528-4e9e-870a-c06efda64bb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.359812 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.360039 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz95l\" (UniqueName: \"kubernetes.io/projected/fdc1b251-c528-4e9e-870a-c06efda64bb4-kube-api-access-cz95l\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.360117 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1b251-c528-4e9e-870a-c06efda64bb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.498971 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.956009 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdc1b251-c528-4e9e-870a-c06efda64bb4","Type":"ContainerDied","Data":"f290d16a2de1989cf2d22d8eb853eb4731533334bb58a5b5c83d0eed06713aad"} Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.956069 4780 scope.go:117] "RemoveContainer" containerID="162727be53a65a5a11494fc1a64265fba4083bde0e36529503d27f63a7a8272b" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.956249 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:14 crc kubenswrapper[4780]: I0219 08:42:14.984835 4780 scope.go:117] "RemoveContainer" containerID="99a7f5081aa662d6253cfb1d9cd4767f9e0834822cf985518fd1020150d4b30f" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.004415 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.017011 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.034225 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:15 crc kubenswrapper[4780]: E0219 08:42:15.035253 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-metadata" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.035297 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-metadata" Feb 19 08:42:15 crc kubenswrapper[4780]: E0219 08:42:15.035359 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-log" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.035376 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-log" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.035958 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-log" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.036026 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" containerName="nova-metadata-metadata" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.038406 4780 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.041647 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.043005 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.046568 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.075727 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.075968 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-logs\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.076090 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-config-data\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.076243 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.076410 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8qk4\" (UniqueName: \"kubernetes.io/projected/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-kube-api-access-w8qk4\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.178539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-logs\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.178660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-config-data\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.178719 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.178989 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-logs\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.180071 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w8qk4\" (UniqueName: \"kubernetes.io/projected/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-kube-api-access-w8qk4\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.180517 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.190798 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.191796 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.192395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-config-data\") pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.202830 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8qk4\" (UniqueName: \"kubernetes.io/projected/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-kube-api-access-w8qk4\") 
pod \"nova-metadata-0\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.377034 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.873583 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.955228 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc1b251-c528-4e9e-870a-c06efda64bb4" path="/var/lib/kubelet/pods/fdc1b251-c528-4e9e-870a-c06efda64bb4/volumes" Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.974577 4780 generic.go:334] "Generic (PLEG): container finished" podID="490019fb-c322-4355-b6c6-5eb9eaba34ca" containerID="c0da2919c8a8269894ab28300296cfe09a550a15aab73746a5abbe2f79a6020e" exitCode=0 Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.974663 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4q56p" event={"ID":"490019fb-c322-4355-b6c6-5eb9eaba34ca","Type":"ContainerDied","Data":"c0da2919c8a8269894ab28300296cfe09a550a15aab73746a5abbe2f79a6020e"} Feb 19 08:42:15 crc kubenswrapper[4780]: I0219 08:42:15.976745 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597","Type":"ContainerStarted","Data":"56d7e12a59141a3b51cd8046ca81acbd900e0956632585f1576d6f8a20cec3a9"} Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.895469 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.920253 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.934286 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.934356 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.936463 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.990875 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"] Feb 19 08:42:16 crc kubenswrapper[4780]: I0219 08:42:16.991328 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" containerName="dnsmasq-dns" containerID="cri-o://b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529" gracePeriod=10 Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.014346 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597","Type":"ContainerStarted","Data":"2efc8c3e15034006e8514b46adc15405a1196f9ddba5112d42b6903fb77a683a"} Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.014627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597","Type":"ContainerStarted","Data":"3e2a3b3a37e03229550d0c5ce92b5d249ceaaa67dbb01958180b8d24dc847a5a"} Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.068402 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.06838284 podStartE2EDuration="2.06838284s" podCreationTimestamp="2026-02-19 08:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:17.049494983 +0000 UTC 
m=+1279.793152442" watchObservedRunningTime="2026-02-19 08:42:17.06838284 +0000 UTC m=+1279.812040289" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.101769 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.513904 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.594718 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-scripts\") pod \"490019fb-c322-4355-b6c6-5eb9eaba34ca\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.594884 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-combined-ca-bundle\") pod \"490019fb-c322-4355-b6c6-5eb9eaba34ca\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.594905 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-config-data\") pod \"490019fb-c322-4355-b6c6-5eb9eaba34ca\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.595018 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbl6k\" (UniqueName: \"kubernetes.io/projected/490019fb-c322-4355-b6c6-5eb9eaba34ca-kube-api-access-wbl6k\") pod \"490019fb-c322-4355-b6c6-5eb9eaba34ca\" (UID: \"490019fb-c322-4355-b6c6-5eb9eaba34ca\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.610245 4780 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-scripts" (OuterVolumeSpecName: "scripts") pod "490019fb-c322-4355-b6c6-5eb9eaba34ca" (UID: "490019fb-c322-4355-b6c6-5eb9eaba34ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.611299 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490019fb-c322-4355-b6c6-5eb9eaba34ca-kube-api-access-wbl6k" (OuterVolumeSpecName: "kube-api-access-wbl6k") pod "490019fb-c322-4355-b6c6-5eb9eaba34ca" (UID: "490019fb-c322-4355-b6c6-5eb9eaba34ca"). InnerVolumeSpecName "kube-api-access-wbl6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.642600 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-config-data" (OuterVolumeSpecName: "config-data") pod "490019fb-c322-4355-b6c6-5eb9eaba34ca" (UID: "490019fb-c322-4355-b6c6-5eb9eaba34ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.697794 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbl6k\" (UniqueName: \"kubernetes.io/projected/490019fb-c322-4355-b6c6-5eb9eaba34ca-kube-api-access-wbl6k\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.697821 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.697830 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.699403 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "490019fb-c322-4355-b6c6-5eb9eaba34ca" (UID: "490019fb-c322-4355-b6c6-5eb9eaba34ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.802181 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.802342 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490019fb-c322-4355-b6c6-5eb9eaba34ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.903644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-svc\") pod \"7579a699-9f79-465c-8161-2cec1aca0af1\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.903705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-config\") pod \"7579a699-9f79-465c-8161-2cec1aca0af1\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.903784 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-swift-storage-0\") pod \"7579a699-9f79-465c-8161-2cec1aca0af1\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.903845 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-sb\") pod \"7579a699-9f79-465c-8161-2cec1aca0af1\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.903924 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8cmm\" (UniqueName: 
\"kubernetes.io/projected/7579a699-9f79-465c-8161-2cec1aca0af1-kube-api-access-l8cmm\") pod \"7579a699-9f79-465c-8161-2cec1aca0af1\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.904019 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-nb\") pod \"7579a699-9f79-465c-8161-2cec1aca0af1\" (UID: \"7579a699-9f79-465c-8161-2cec1aca0af1\") " Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.911243 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7579a699-9f79-465c-8161-2cec1aca0af1-kube-api-access-l8cmm" (OuterVolumeSpecName: "kube-api-access-l8cmm") pod "7579a699-9f79-465c-8161-2cec1aca0af1" (UID: "7579a699-9f79-465c-8161-2cec1aca0af1"). InnerVolumeSpecName "kube-api-access-l8cmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.955174 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7579a699-9f79-465c-8161-2cec1aca0af1" (UID: "7579a699-9f79-465c-8161-2cec1aca0af1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.955402 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-config" (OuterVolumeSpecName: "config") pod "7579a699-9f79-465c-8161-2cec1aca0af1" (UID: "7579a699-9f79-465c-8161-2cec1aca0af1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.958196 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7579a699-9f79-465c-8161-2cec1aca0af1" (UID: "7579a699-9f79-465c-8161-2cec1aca0af1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.958767 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7579a699-9f79-465c-8161-2cec1aca0af1" (UID: "7579a699-9f79-465c-8161-2cec1aca0af1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:17 crc kubenswrapper[4780]: I0219 08:42:17.959096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7579a699-9f79-465c-8161-2cec1aca0af1" (UID: "7579a699-9f79-465c-8161-2cec1aca0af1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.006264 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.006298 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.006310 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.006323 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.006334 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8cmm\" (UniqueName: \"kubernetes.io/projected/7579a699-9f79-465c-8161-2cec1aca0af1-kube-api-access-l8cmm\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.006344 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7579a699-9f79-465c-8161-2cec1aca0af1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.017277 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.017498 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.033906 4780 generic.go:334] "Generic (PLEG): container finished" podID="7579a699-9f79-465c-8161-2cec1aca0af1" containerID="b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529" exitCode=0 Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.033990 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.034018 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" event={"ID":"7579a699-9f79-465c-8161-2cec1aca0af1","Type":"ContainerDied","Data":"b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529"} Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.035163 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j" event={"ID":"7579a699-9f79-465c-8161-2cec1aca0af1","Type":"ContainerDied","Data":"866da110345843fd426523c8da510bc057c394857b1bb305ec1a9ec89bd511b8"} Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.035189 4780 scope.go:117] "RemoveContainer" containerID="b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.043716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4q56p" event={"ID":"490019fb-c322-4355-b6c6-5eb9eaba34ca","Type":"ContainerDied","Data":"6e978c20e3d28f56b7ffd362b4acda21fa0c6cf9ee5aa3921021cd2bfdd1a875"} Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 
08:42:18.043825 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e978c20e3d28f56b7ffd362b4acda21fa0c6cf9ee5aa3921021cd2bfdd1a875" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.043728 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4q56p" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.063868 4780 scope.go:117] "RemoveContainer" containerID="e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.070338 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"] Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.079091 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-mwj9j"] Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.083804 4780 scope.go:117] "RemoveContainer" containerID="b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529" Feb 19 08:42:18 crc kubenswrapper[4780]: E0219 08:42:18.085055 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529\": container with ID starting with b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529 not found: ID does not exist" containerID="b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.085149 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529"} err="failed to get container status \"b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529\": rpc error: code = NotFound desc = could not find container \"b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529\": container with ID 
starting with b57f88ebda65d55736c2638c8fcc07cf62db23b63de4e2f51cb43274ff75d529 not found: ID does not exist" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.085223 4780 scope.go:117] "RemoveContainer" containerID="e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a" Feb 19 08:42:18 crc kubenswrapper[4780]: E0219 08:42:18.085506 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a\": container with ID starting with e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a not found: ID does not exist" containerID="e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.085524 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a"} err="failed to get container status \"e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a\": rpc error: code = NotFound desc = could not find container \"e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a\": container with ID starting with e8c0d31353d8ab94bc330d7ee23101696326a7a5af1917efc38c406112fde78a not found: ID does not exist" Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.134641 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.134949 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-log" containerID="cri-o://38040ea56b225242c2e5c9cae8b740be82727750e90fd9c3aaba7666320b30fe" gracePeriod=30 Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.135090 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-api" containerID="cri-o://444e0f110cd648e5d3384fce038e8395504e0a4acdfad5c0a2771d0cb9525f7f" gracePeriod=30 Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.153013 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.295434 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.642539 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:42:18 crc kubenswrapper[4780]: I0219 08:42:18.642740 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9a785445-258d-4c77-a8e3-294ba1f0aca3" containerName="kube-state-metrics" containerID="cri-o://b3f6d237e46b0b3bb57611a162e3454e7362a84be107df1aaddb897ff7b77d95" gracePeriod=30 Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.054661 4780 generic.go:334] "Generic (PLEG): container finished" podID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerID="38040ea56b225242c2e5c9cae8b740be82727750e90fd9c3aaba7666320b30fe" exitCode=143 Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.054956 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9bd54f39-dfc3-4015-809e-814ff2c9782b","Type":"ContainerDied","Data":"38040ea56b225242c2e5c9cae8b740be82727750e90fd9c3aaba7666320b30fe"} Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.058386 4780 generic.go:334] "Generic (PLEG): container finished" podID="9a785445-258d-4c77-a8e3-294ba1f0aca3" containerID="b3f6d237e46b0b3bb57611a162e3454e7362a84be107df1aaddb897ff7b77d95" exitCode=2 Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.058501 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"9a785445-258d-4c77-a8e3-294ba1f0aca3","Type":"ContainerDied","Data":"b3f6d237e46b0b3bb57611a162e3454e7362a84be107df1aaddb897ff7b77d95"} Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.060047 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" containerName="nova-scheduler-scheduler" containerID="cri-o://8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd" gracePeriod=30 Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.060670 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-log" containerID="cri-o://3e2a3b3a37e03229550d0c5ce92b5d249ceaaa67dbb01958180b8d24dc847a5a" gracePeriod=30 Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.060707 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-metadata" containerID="cri-o://2efc8c3e15034006e8514b46adc15405a1196f9ddba5112d42b6903fb77a683a" gracePeriod=30 Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.244539 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.331174 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggrr8\" (UniqueName: \"kubernetes.io/projected/9a785445-258d-4c77-a8e3-294ba1f0aca3-kube-api-access-ggrr8\") pod \"9a785445-258d-4c77-a8e3-294ba1f0aca3\" (UID: \"9a785445-258d-4c77-a8e3-294ba1f0aca3\") " Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.346173 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a785445-258d-4c77-a8e3-294ba1f0aca3-kube-api-access-ggrr8" (OuterVolumeSpecName: "kube-api-access-ggrr8") pod "9a785445-258d-4c77-a8e3-294ba1f0aca3" (UID: "9a785445-258d-4c77-a8e3-294ba1f0aca3"). InnerVolumeSpecName "kube-api-access-ggrr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.433747 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggrr8\" (UniqueName: \"kubernetes.io/projected/9a785445-258d-4c77-a8e3-294ba1f0aca3-kube-api-access-ggrr8\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:19 crc kubenswrapper[4780]: I0219 08:42:19.969282 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" path="/var/lib/kubelet/pods/7579a699-9f79-465c-8161-2cec1aca0af1/volumes" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.102428 4780 generic.go:334] "Generic (PLEG): container finished" podID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerID="2efc8c3e15034006e8514b46adc15405a1196f9ddba5112d42b6903fb77a683a" exitCode=0 Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.102467 4780 generic.go:334] "Generic (PLEG): container finished" podID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerID="3e2a3b3a37e03229550d0c5ce92b5d249ceaaa67dbb01958180b8d24dc847a5a" exitCode=143 Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 
08:42:20.102509 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597","Type":"ContainerDied","Data":"2efc8c3e15034006e8514b46adc15405a1196f9ddba5112d42b6903fb77a683a"} Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.102553 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597","Type":"ContainerDied","Data":"3e2a3b3a37e03229550d0c5ce92b5d249ceaaa67dbb01958180b8d24dc847a5a"} Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.105150 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a785445-258d-4c77-a8e3-294ba1f0aca3","Type":"ContainerDied","Data":"48b5ea0a6fb874126eaae00a3bb03f3cdc6514dcd0139bacbdbb9a46eeadbc09"} Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.105223 4780 scope.go:117] "RemoveContainer" containerID="b3f6d237e46b0b3bb57611a162e3454e7362a84be107df1aaddb897ff7b77d95" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.105301 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.139173 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.154203 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.161950 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:42:20 crc kubenswrapper[4780]: E0219 08:42:20.162371 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" containerName="dnsmasq-dns" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162387 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" containerName="dnsmasq-dns" Feb 19 08:42:20 crc kubenswrapper[4780]: E0219 08:42:20.162404 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a785445-258d-4c77-a8e3-294ba1f0aca3" containerName="kube-state-metrics" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162410 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a785445-258d-4c77-a8e3-294ba1f0aca3" containerName="kube-state-metrics" Feb 19 08:42:20 crc kubenswrapper[4780]: E0219 08:42:20.162420 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490019fb-c322-4355-b6c6-5eb9eaba34ca" containerName="nova-manage" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162426 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="490019fb-c322-4355-b6c6-5eb9eaba34ca" containerName="nova-manage" Feb 19 08:42:20 crc kubenswrapper[4780]: E0219 08:42:20.162450 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" containerName="init" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162455 4780 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" containerName="init" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162626 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="490019fb-c322-4355-b6c6-5eb9eaba34ca" containerName="nova-manage" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162640 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7579a699-9f79-465c-8161-2cec1aca0af1" containerName="dnsmasq-dns" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.162654 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a785445-258d-4c77-a8e3-294ba1f0aca3" containerName="kube-state-metrics" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.163278 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.164863 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.165115 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.177291 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.356417 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.359325 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.359382 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pr56\" (UniqueName: \"kubernetes.io/projected/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-api-access-2pr56\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.359500 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.359533 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461022 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-config-data\") pod \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\" (UID: 
\"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461075 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-nova-metadata-tls-certs\") pod \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461286 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8qk4\" (UniqueName: \"kubernetes.io/projected/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-kube-api-access-w8qk4\") pod \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461357 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-logs\") pod \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461386 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-combined-ca-bundle\") pod \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\" (UID: \"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597\") " Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461659 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2pr56\" (UniqueName: \"kubernetes.io/projected/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-api-access-2pr56\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461731 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.461755 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.463061 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-logs" (OuterVolumeSpecName: "logs") pod "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" (UID: "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.466297 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-kube-api-access-w8qk4" (OuterVolumeSpecName: "kube-api-access-w8qk4") pod "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" (UID: "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597"). InnerVolumeSpecName "kube-api-access-w8qk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.466934 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.467110 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.469088 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.482487 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pr56\" (UniqueName: \"kubernetes.io/projected/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-api-access-2pr56\") pod \"kube-state-metrics-0\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " pod="openstack/kube-state-metrics-0" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.490575 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-config-data" (OuterVolumeSpecName: "config-data") pod "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" (UID: "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.492255 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" (UID: "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.512445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" (UID: "1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.563900 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8qk4\" (UniqueName: \"kubernetes.io/projected/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-kube-api-access-w8qk4\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.563937 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.563951 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.563966 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-config-data\") on 
node \"crc\" DevicePath \"\"" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.563978 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.567053 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.567411 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-central-agent" containerID="cri-o://7fd9123c3c93cf97de952f4968c96a4b69a44f2e5159ea4589984dd56c117f1e" gracePeriod=30 Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.567455 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="sg-core" containerID="cri-o://bc5ef7ef4bd750a4764ce70848d7c646155afde6f4f12a3f795f04acd33850e5" gracePeriod=30 Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.567543 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="proxy-httpd" containerID="cri-o://1c5c0ed331390b6094b925152821cd290354a6796a55cd56bab6e1424fa74311" gracePeriod=30 Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.567498 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-notification-agent" containerID="cri-o://488d4e0a6038546d21ae96cb49f871e3d307c55c4627631365523f8c0e3d5704" gracePeriod=30 Feb 19 08:42:20 crc kubenswrapper[4780]: I0219 08:42:20.779707 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.113973 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.113964 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597","Type":"ContainerDied","Data":"56d7e12a59141a3b51cd8046ca81acbd900e0956632585f1576d6f8a20cec3a9"} Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.114576 4780 scope.go:117] "RemoveContainer" containerID="2efc8c3e15034006e8514b46adc15405a1196f9ddba5112d42b6903fb77a683a" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.119779 4780 generic.go:334] "Generic (PLEG): container finished" podID="eea93a11-04b6-4394-9f72-576540d36f5d" containerID="1c5c0ed331390b6094b925152821cd290354a6796a55cd56bab6e1424fa74311" exitCode=0 Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.119800 4780 generic.go:334] "Generic (PLEG): container finished" podID="eea93a11-04b6-4394-9f72-576540d36f5d" containerID="bc5ef7ef4bd750a4764ce70848d7c646155afde6f4f12a3f795f04acd33850e5" exitCode=2 Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.119808 4780 generic.go:334] "Generic (PLEG): container finished" podID="eea93a11-04b6-4394-9f72-576540d36f5d" containerID="7fd9123c3c93cf97de952f4968c96a4b69a44f2e5159ea4589984dd56c117f1e" exitCode=0 Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.119850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerDied","Data":"1c5c0ed331390b6094b925152821cd290354a6796a55cd56bab6e1424fa74311"} Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.119874 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerDied","Data":"bc5ef7ef4bd750a4764ce70848d7c646155afde6f4f12a3f795f04acd33850e5"} Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.119885 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerDied","Data":"7fd9123c3c93cf97de952f4968c96a4b69a44f2e5159ea4589984dd56c117f1e"} Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.141523 4780 scope.go:117] "RemoveContainer" containerID="3e2a3b3a37e03229550d0c5ce92b5d249ceaaa67dbb01958180b8d24dc847a5a" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.164107 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.186190 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.198164 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:21 crc kubenswrapper[4780]: E0219 08:42:21.198540 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-log" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.198558 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-log" Feb 19 08:42:21 crc kubenswrapper[4780]: E0219 08:42:21.198578 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-metadata" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.198586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-metadata" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.198736 4780 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-log" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.198750 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" containerName="nova-metadata-metadata" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.199600 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.203302 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.203402 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.211654 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.240342 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:42:21 crc kubenswrapper[4780]: W0219 08:42:21.298065 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda27398f8_93a8_47a9_a517_b161dad9cc11.slice/crio-a70aab4d7cef4926970f5bcbc8df9a9712c0b0107361ea53c5b644c4fa30626b WatchSource:0}: Error finding container a70aab4d7cef4926970f5bcbc8df9a9712c0b0107361ea53c5b644c4fa30626b: Status 404 returned error can't find the container with id a70aab4d7cef4926970f5bcbc8df9a9712c0b0107361ea53c5b644c4fa30626b Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.386926 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.387303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-config-data\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.387354 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.388085 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80d54d71-4d2f-44ec-bf81-53a184bdb557-logs\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.388356 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczqt\" (UniqueName: \"kubernetes.io/projected/80d54d71-4d2f-44ec-bf81-53a184bdb557-kube-api-access-wczqt\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.490566 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczqt\" (UniqueName: \"kubernetes.io/projected/80d54d71-4d2f-44ec-bf81-53a184bdb557-kube-api-access-wczqt\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 
crc kubenswrapper[4780]: I0219 08:42:21.490688 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.490722 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-config-data\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.490743 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.490785 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80d54d71-4d2f-44ec-bf81-53a184bdb557-logs\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.491322 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80d54d71-4d2f-44ec-bf81-53a184bdb557-logs\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.496305 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.496895 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-config-data\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.498712 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.517581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczqt\" (UniqueName: \"kubernetes.io/projected/80d54d71-4d2f-44ec-bf81-53a184bdb557-kube-api-access-wczqt\") pod \"nova-metadata-0\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.582063 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.698041 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.897857 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-config-data\") pod \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.898182 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-combined-ca-bundle\") pod \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.898232 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9kw8\" (UniqueName: \"kubernetes.io/projected/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-kube-api-access-c9kw8\") pod \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\" (UID: \"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13\") " Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.905378 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-kube-api-access-c9kw8" (OuterVolumeSpecName: "kube-api-access-c9kw8") pod "ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" (UID: "ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13"). InnerVolumeSpecName "kube-api-access-c9kw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.936569 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-config-data" (OuterVolumeSpecName: "config-data") pod "ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" (UID: "ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:21 crc kubenswrapper[4780]: I0219 08:42:21.988775 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" (UID: "ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.000243 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597" path="/var/lib/kubelet/pods/1d37aeec-dab2-4ad7-ab11-0ebdcbf8e597/volumes" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.000869 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a785445-258d-4c77-a8e3-294ba1f0aca3" path="/var/lib/kubelet/pods/9a785445-258d-4c77-a8e3-294ba1f0aca3/volumes" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.011938 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.011974 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9kw8\" (UniqueName: \"kubernetes.io/projected/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-kube-api-access-c9kw8\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.011989 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:22 crc kubenswrapper[4780]: W0219 08:42:22.083088 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80d54d71_4d2f_44ec_bf81_53a184bdb557.slice/crio-b00f3a71f246d59d172145ffb5402dd2ae4352b6237288af2fe962d46ad6f0ac WatchSource:0}: Error finding container b00f3a71f246d59d172145ffb5402dd2ae4352b6237288af2fe962d46ad6f0ac: Status 404 returned error can't find the container with id b00f3a71f246d59d172145ffb5402dd2ae4352b6237288af2fe962d46ad6f0ac Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.087280 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.130504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80d54d71-4d2f-44ec-bf81-53a184bdb557","Type":"ContainerStarted","Data":"b00f3a71f246d59d172145ffb5402dd2ae4352b6237288af2fe962d46ad6f0ac"} Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.134921 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" containerID="8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd" exitCode=0 Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.134977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13","Type":"ContainerDied","Data":"8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd"} Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.135004 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13","Type":"ContainerDied","Data":"4ad5f5ea13ca4cc7dca8f25f43a6c8c414925cae92cb4b4381307f56c8b54dd2"} Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.135021 4780 scope.go:117] "RemoveContainer" containerID="8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.135164 4780 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.140409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a27398f8-93a8-47a9-a517-b161dad9cc11","Type":"ContainerStarted","Data":"a70aab4d7cef4926970f5bcbc8df9a9712c0b0107361ea53c5b644c4fa30626b"} Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.163224 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.176341 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.187789 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:22 crc kubenswrapper[4780]: E0219 08:42:22.188293 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" containerName="nova-scheduler-scheduler" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.188309 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" containerName="nova-scheduler-scheduler" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.188471 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" containerName="nova-scheduler-scheduler" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.189168 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.195060 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.206647 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.219480 4780 scope.go:117] "RemoveContainer" containerID="8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd" Feb 19 08:42:22 crc kubenswrapper[4780]: E0219 08:42:22.221757 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd\": container with ID starting with 8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd not found: ID does not exist" containerID="8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.221787 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd"} err="failed to get container status \"8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd\": rpc error: code = NotFound desc = could not find container \"8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd\": container with ID starting with 8832e6b15bec684e65b5a4ddb1e76c9b8d019e6e7a05c6d559d08404ac8408bd not found: ID does not exist" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.316797 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c866\" (UniqueName: \"kubernetes.io/projected/dca213b7-a7fc-4a51-816e-69bb4b586520-kube-api-access-2c866\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " 
pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.316938 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-config-data\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.316988 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.419300 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-config-data\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.419936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.420176 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c866\" (UniqueName: \"kubernetes.io/projected/dca213b7-a7fc-4a51-816e-69bb4b586520-kube-api-access-2c866\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.424316 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-config-data\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.427696 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.437827 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c866\" (UniqueName: \"kubernetes.io/projected/dca213b7-a7fc-4a51-816e-69bb4b586520-kube-api-access-2c866\") pod \"nova-scheduler-0\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " pod="openstack/nova-scheduler-0" Feb 19 08:42:22 crc kubenswrapper[4780]: I0219 08:42:22.594949 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.102196 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:42:23 crc kubenswrapper[4780]: W0219 08:42:23.106698 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddca213b7_a7fc_4a51_816e_69bb4b586520.slice/crio-b57eeb753c63d83d5e9aad7a2b4c3cce7cbb07360b0b47fab5625c7536e7c542 WatchSource:0}: Error finding container b57eeb753c63d83d5e9aad7a2b4c3cce7cbb07360b0b47fab5625c7536e7c542: Status 404 returned error can't find the container with id b57eeb753c63d83d5e9aad7a2b4c3cce7cbb07360b0b47fab5625c7536e7c542 Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.152351 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a27398f8-93a8-47a9-a517-b161dad9cc11","Type":"ContainerStarted","Data":"d0d0ad671ef9d17b1605ad8b7bc48a11301a49a0cc5f0ee6915c47281564ebce"} Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.152416 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.155449 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80d54d71-4d2f-44ec-bf81-53a184bdb557","Type":"ContainerStarted","Data":"fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba"} Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.156649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dca213b7-a7fc-4a51-816e-69bb4b586520","Type":"ContainerStarted","Data":"b57eeb753c63d83d5e9aad7a2b4c3cce7cbb07360b0b47fab5625c7536e7c542"} Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.172942 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=2.39266855 podStartE2EDuration="3.172921672s" podCreationTimestamp="2026-02-19 08:42:20 +0000 UTC" firstStartedPulling="2026-02-19 08:42:21.306416257 +0000 UTC m=+1284.050073706" lastFinishedPulling="2026-02-19 08:42:22.086669389 +0000 UTC m=+1284.830326828" observedRunningTime="2026-02-19 08:42:23.164526385 +0000 UTC m=+1285.908183824" watchObservedRunningTime="2026-02-19 08:42:23.172921672 +0000 UTC m=+1285.916579121" Feb 19 08:42:23 crc kubenswrapper[4780]: I0219 08:42:23.949576 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13" path="/var/lib/kubelet/pods/ac9ae4c6-bd60-4eb9-9e96-878bb05c4f13/volumes" Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.170487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80d54d71-4d2f-44ec-bf81-53a184bdb557","Type":"ContainerStarted","Data":"5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c"} Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.174023 4780 generic.go:334] "Generic (PLEG): container finished" podID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerID="444e0f110cd648e5d3384fce038e8395504e0a4acdfad5c0a2771d0cb9525f7f" exitCode=0 Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.174078 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9bd54f39-dfc3-4015-809e-814ff2c9782b","Type":"ContainerDied","Data":"444e0f110cd648e5d3384fce038e8395504e0a4acdfad5c0a2771d0cb9525f7f"} Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.177580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dca213b7-a7fc-4a51-816e-69bb4b586520","Type":"ContainerStarted","Data":"b6f26a651ab0f1793c7bf0c31982d1f809f1110c20c3b9d37e0f0efbe9c0ebe7"} Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.206230 4780 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.206208606 podStartE2EDuration="3.206208606s" podCreationTimestamp="2026-02-19 08:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:24.201351046 +0000 UTC m=+1286.945008505" watchObservedRunningTime="2026-02-19 08:42:24.206208606 +0000 UTC m=+1286.949866055" Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.221436 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.221422672 podStartE2EDuration="2.221422672s" podCreationTimestamp="2026-02-19 08:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:24.219737721 +0000 UTC m=+1286.963395170" watchObservedRunningTime="2026-02-19 08:42:24.221422672 +0000 UTC m=+1286.965080121" Feb 19 08:42:24 crc kubenswrapper[4780]: I0219 08:42:24.906007 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.076667 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-combined-ca-bundle\") pod \"9bd54f39-dfc3-4015-809e-814ff2c9782b\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.076751 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd54f39-dfc3-4015-809e-814ff2c9782b-logs\") pod \"9bd54f39-dfc3-4015-809e-814ff2c9782b\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.076808 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-config-data\") pod \"9bd54f39-dfc3-4015-809e-814ff2c9782b\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.076861 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmkj8\" (UniqueName: \"kubernetes.io/projected/9bd54f39-dfc3-4015-809e-814ff2c9782b-kube-api-access-vmkj8\") pod \"9bd54f39-dfc3-4015-809e-814ff2c9782b\" (UID: \"9bd54f39-dfc3-4015-809e-814ff2c9782b\") " Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.077578 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd54f39-dfc3-4015-809e-814ff2c9782b-logs" (OuterVolumeSpecName: "logs") pod "9bd54f39-dfc3-4015-809e-814ff2c9782b" (UID: "9bd54f39-dfc3-4015-809e-814ff2c9782b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.085366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd54f39-dfc3-4015-809e-814ff2c9782b-kube-api-access-vmkj8" (OuterVolumeSpecName: "kube-api-access-vmkj8") pod "9bd54f39-dfc3-4015-809e-814ff2c9782b" (UID: "9bd54f39-dfc3-4015-809e-814ff2c9782b"). InnerVolumeSpecName "kube-api-access-vmkj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.129649 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-config-data" (OuterVolumeSpecName: "config-data") pod "9bd54f39-dfc3-4015-809e-814ff2c9782b" (UID: "9bd54f39-dfc3-4015-809e-814ff2c9782b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.141357 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bd54f39-dfc3-4015-809e-814ff2c9782b" (UID: "9bd54f39-dfc3-4015-809e-814ff2c9782b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.182434 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.182473 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd54f39-dfc3-4015-809e-814ff2c9782b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.182486 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd54f39-dfc3-4015-809e-814ff2c9782b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.182497 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmkj8\" (UniqueName: \"kubernetes.io/projected/9bd54f39-dfc3-4015-809e-814ff2c9782b-kube-api-access-vmkj8\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.190634 4780 generic.go:334] "Generic (PLEG): container finished" podID="eea93a11-04b6-4394-9f72-576540d36f5d" containerID="488d4e0a6038546d21ae96cb49f871e3d307c55c4627631365523f8c0e3d5704" exitCode=0 Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.190718 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerDied","Data":"488d4e0a6038546d21ae96cb49f871e3d307c55c4627631365523f8c0e3d5704"} Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.197309 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9bd54f39-dfc3-4015-809e-814ff2c9782b","Type":"ContainerDied","Data":"e60db7cd6ef344734ad733588de3e79ec21683744165a047d630a6adba13b068"} Feb 19 08:42:25 crc 
kubenswrapper[4780]: I0219 08:42:25.197351 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.197436 4780 scope.go:117] "RemoveContainer" containerID="444e0f110cd648e5d3384fce038e8395504e0a4acdfad5c0a2771d0cb9525f7f"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.249754 4780 scope.go:117] "RemoveContainer" containerID="38040ea56b225242c2e5c9cae8b740be82727750e90fd9c3aaba7666320b30fe"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.306597 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.315032 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.330203 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:42:25 crc kubenswrapper[4780]: E0219 08:42:25.330623 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-log"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.330637 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-log"
Feb 19 08:42:25 crc kubenswrapper[4780]: E0219 08:42:25.330664 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-api"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.330670 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-api"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.330866 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-log"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.330882 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" containerName="nova-api-api"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.331840 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.336324 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.336513 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.382478 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.487710 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-config-data\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.487817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-scripts\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.487893 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-run-httpd\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.487908 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-sg-core-conf-yaml\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.488058 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-log-httpd\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.488083 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-combined-ca-bundle\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.488157 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qnjw\" (UniqueName: \"kubernetes.io/projected/eea93a11-04b6-4394-9f72-576540d36f5d-kube-api-access-7qnjw\") pod \"eea93a11-04b6-4394-9f72-576540d36f5d\" (UID: \"eea93a11-04b6-4394-9f72-576540d36f5d\") "
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.488356 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.488590 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.488888 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4556e18-aa66-4e5b-a82f-141e6019d67d-logs\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.489013 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-668tp\" (UniqueName: \"kubernetes.io/projected/c4556e18-aa66-4e5b-a82f-141e6019d67d-kube-api-access-668tp\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.489104 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-config-data\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.489219 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.489289 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.489308 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eea93a11-04b6-4394-9f72-576540d36f5d-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.491050 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea93a11-04b6-4394-9f72-576540d36f5d-kube-api-access-7qnjw" (OuterVolumeSpecName: "kube-api-access-7qnjw") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "kube-api-access-7qnjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.492393 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-scripts" (OuterVolumeSpecName: "scripts") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.514948 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.582661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-config-data" (OuterVolumeSpecName: "config-data") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.583285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eea93a11-04b6-4394-9f72-576540d36f5d" (UID: "eea93a11-04b6-4394-9f72-576540d36f5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.590680 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4556e18-aa66-4e5b-a82f-141e6019d67d-logs\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.590855 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-668tp\" (UniqueName: \"kubernetes.io/projected/c4556e18-aa66-4e5b-a82f-141e6019d67d-kube-api-access-668tp\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.590925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-config-data\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591003 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591330 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qnjw\" (UniqueName: \"kubernetes.io/projected/eea93a11-04b6-4394-9f72-576540d36f5d-kube-api-access-7qnjw\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591424 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591423 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4556e18-aa66-4e5b-a82f-141e6019d67d-logs\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591441 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591512 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.591542 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea93a11-04b6-4394-9f72-576540d36f5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.597332 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-config-data\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.606706 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.611486 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-668tp\" (UniqueName: \"kubernetes.io/projected/c4556e18-aa66-4e5b-a82f-141e6019d67d-kube-api-access-668tp\") pod \"nova-api-0\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.674266 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 08:42:25 crc kubenswrapper[4780]: I0219 08:42:25.949471 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd54f39-dfc3-4015-809e-814ff2c9782b" path="/var/lib/kubelet/pods/9bd54f39-dfc3-4015-809e-814ff2c9782b/volumes"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.170092 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.220424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4556e18-aa66-4e5b-a82f-141e6019d67d","Type":"ContainerStarted","Data":"5acd499476dd9cbc6bf55118ef961df48cbcdefd66c35eae0517d4e7851a577e"}
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.225157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eea93a11-04b6-4394-9f72-576540d36f5d","Type":"ContainerDied","Data":"c6d61d0c2f0c8ac956fdbc623294926cc69eebbb0c75d5282f7c39d154abce44"}
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.225267 4780 scope.go:117] "RemoveContainer" containerID="1c5c0ed331390b6094b925152821cd290354a6796a55cd56bab6e1424fa74311"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.225430 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.256547 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.269114 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278175 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:42:26 crc kubenswrapper[4780]: E0219 08:42:26.278544 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-notification-agent"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278562 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-notification-agent"
Feb 19 08:42:26 crc kubenswrapper[4780]: E0219 08:42:26.278571 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-central-agent"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278578 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-central-agent"
Feb 19 08:42:26 crc kubenswrapper[4780]: E0219 08:42:26.278592 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="proxy-httpd"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278611 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="proxy-httpd"
Feb 19 08:42:26 crc kubenswrapper[4780]: E0219 08:42:26.278632 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="sg-core"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278638 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="sg-core"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278788 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="proxy-httpd"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278802 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-notification-agent"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278814 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="ceilometer-central-agent"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.278826 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" containerName="sg-core"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.280352 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.282067 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.282707 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.282894 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.290458 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.305276 4780 scope.go:117] "RemoveContainer" containerID="bc5ef7ef4bd750a4764ce70848d7c646155afde6f4f12a3f795f04acd33850e5"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.364572 4780 scope.go:117] "RemoveContainer" containerID="488d4e0a6038546d21ae96cb49f871e3d307c55c4627631365523f8c0e3d5704"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.394529 4780 scope.go:117] "RemoveContainer" containerID="7fd9123c3c93cf97de952f4968c96a4b69a44f2e5159ea4589984dd56c117f1e"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406001 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-run-httpd\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-config-data\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406166 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406186 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406247 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-log-httpd\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406277 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406326 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-scripts\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.406351 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2ng\" (UniqueName: \"kubernetes.io/projected/a3a7a88c-572e-4677-a8ce-9df09432b6d1-kube-api-access-zl2ng\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.507827 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-scripts\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.507882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2ng\" (UniqueName: \"kubernetes.io/projected/a3a7a88c-572e-4677-a8ce-9df09432b6d1-kube-api-access-zl2ng\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.507901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-run-httpd\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.507968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-config-data\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.508283 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.508310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.508604 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-run-httpd\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.508376 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-log-httpd\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.508721 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.509210 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-log-httpd\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.514731 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.514718 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.515446 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.515553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-config-data\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.528574 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2ng\" (UniqueName: \"kubernetes.io/projected/a3a7a88c-572e-4677-a8ce-9df09432b6d1-kube-api-access-zl2ng\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.529322 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-scripts\") pod \"ceilometer-0\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " pod="openstack/ceilometer-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.583376 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.583432 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 08:42:26 crc kubenswrapper[4780]: I0219 08:42:26.613137 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:42:27 crc kubenswrapper[4780]: I0219 08:42:27.095321 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 08:42:27 crc kubenswrapper[4780]: W0219 08:42:27.101508 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3a7a88c_572e_4677_a8ce_9df09432b6d1.slice/crio-3523d64af7aef16916846547decdb2c83a8a921a27f27e164e3eeea03a181b81 WatchSource:0}: Error finding container 3523d64af7aef16916846547decdb2c83a8a921a27f27e164e3eeea03a181b81: Status 404 returned error can't find the container with id 3523d64af7aef16916846547decdb2c83a8a921a27f27e164e3eeea03a181b81
Feb 19 08:42:27 crc kubenswrapper[4780]: I0219 08:42:27.240990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerStarted","Data":"3523d64af7aef16916846547decdb2c83a8a921a27f27e164e3eeea03a181b81"}
Feb 19 08:42:27 crc kubenswrapper[4780]: I0219 08:42:27.244874 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4556e18-aa66-4e5b-a82f-141e6019d67d","Type":"ContainerStarted","Data":"7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be"}
Feb 19 08:42:27 crc kubenswrapper[4780]: I0219 08:42:27.595937 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 08:42:27 crc kubenswrapper[4780]: I0219 08:42:27.960071 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea93a11-04b6-4394-9f72-576540d36f5d" path="/var/lib/kubelet/pods/eea93a11-04b6-4394-9f72-576540d36f5d/volumes"
Feb 19 08:42:28 crc kubenswrapper[4780]: I0219 08:42:28.257807 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4556e18-aa66-4e5b-a82f-141e6019d67d","Type":"ContainerStarted","Data":"7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037"}
Feb 19 08:42:28 crc kubenswrapper[4780]: I0219 08:42:28.279736 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.279714149 podStartE2EDuration="3.279714149s" podCreationTimestamp="2026-02-19 08:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:28.271785693 +0000 UTC m=+1291.015443152" watchObservedRunningTime="2026-02-19 08:42:28.279714149 +0000 UTC m=+1291.023371598"
Feb 19 08:42:30 crc kubenswrapper[4780]: I0219 08:42:30.284061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerStarted","Data":"61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245"}
Feb 19 08:42:30 crc kubenswrapper[4780]: I0219 08:42:30.795675 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 19 08:42:31 crc kubenswrapper[4780]: I0219 08:42:31.294653 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerStarted","Data":"d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489"}
Feb 19 08:42:31 crc kubenswrapper[4780]: I0219 08:42:31.583258 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 08:42:31 crc kubenswrapper[4780]: I0219 08:42:31.583320 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 08:42:32 crc kubenswrapper[4780]: I0219 08:42:32.596012 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 08:42:32 crc kubenswrapper[4780]: I0219 08:42:32.596403 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 08:42:32 crc kubenswrapper[4780]: I0219 08:42:32.596444 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 08:42:32 crc kubenswrapper[4780]: I0219 08:42:32.641323 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 08:42:33 crc kubenswrapper[4780]: I0219 08:42:33.315771 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerStarted","Data":"3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c"}
Feb 19 08:42:33 crc kubenswrapper[4780]: I0219 08:42:33.353008 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 08:42:35 crc kubenswrapper[4780]: I0219 08:42:35.675572 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 08:42:35 crc kubenswrapper[4780]: I0219 08:42:35.676189 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.335802 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.336141 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.360726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerStarted","Data":"bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1"}
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.361038 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.385933 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.164763707 podStartE2EDuration="10.38591607s" podCreationTimestamp="2026-02-19 08:42:26 +0000 UTC" firstStartedPulling="2026-02-19 08:42:27.103484722 +0000 UTC m=+1289.847142191" lastFinishedPulling="2026-02-19 08:42:35.324637085 +0000 UTC m=+1298.068294554" observedRunningTime="2026-02-19 08:42:36.384078135 +0000 UTC m=+1299.127735604" watchObservedRunningTime="2026-02-19 08:42:36.38591607 +0000 UTC m=+1299.129573529"
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.757382 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 08:42:36 crc kubenswrapper[4780]: I0219 08:42:36.757382 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 08:42:41 crc kubenswrapper[4780]: I0219 08:42:41.592218 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 08:42:41 crc kubenswrapper[4780]: I0219 08:42:41.596194 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 08:42:41 crc kubenswrapper[4780]: I0219 08:42:41.599963 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 08:42:42 crc kubenswrapper[4780]: I0219 08:42:42.426716 4780 generic.go:334] "Generic (PLEG): container finished" podID="c24ac63b-902e-402b-a8bb-5468f6ccad62" containerID="1de7dd1d38ac3ce7143b43768b32bfccd71711cf336d1f62a1ca7c2a1768a1e9" exitCode=137
Feb 19 08:42:42 crc kubenswrapper[4780]: I0219 08:42:42.426822 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c24ac63b-902e-402b-a8bb-5468f6ccad62","Type":"ContainerDied","Data":"1de7dd1d38ac3ce7143b43768b32bfccd71711cf336d1f62a1ca7c2a1768a1e9"}
Feb 19 08:42:42 crc kubenswrapper[4780]: I0219 08:42:42.435280 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 08:42:42 crc kubenswrapper[4780]: I0219 08:42:42.915973 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.030736 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x6f5\" (UniqueName: \"kubernetes.io/projected/c24ac63b-902e-402b-a8bb-5468f6ccad62-kube-api-access-9x6f5\") pod \"c24ac63b-902e-402b-a8bb-5468f6ccad62\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") "
Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.030849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-combined-ca-bundle\") pod \"c24ac63b-902e-402b-a8bb-5468f6ccad62\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") "
Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.031092 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-config-data\") pod \"c24ac63b-902e-402b-a8bb-5468f6ccad62\" (UID: \"c24ac63b-902e-402b-a8bb-5468f6ccad62\") "
Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.036439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24ac63b-902e-402b-a8bb-5468f6ccad62-kube-api-access-9x6f5" (OuterVolumeSpecName: "kube-api-access-9x6f5") pod "c24ac63b-902e-402b-a8bb-5468f6ccad62" (UID: "c24ac63b-902e-402b-a8bb-5468f6ccad62"). InnerVolumeSpecName "kube-api-access-9x6f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.062153 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-config-data" (OuterVolumeSpecName: "config-data") pod "c24ac63b-902e-402b-a8bb-5468f6ccad62" (UID: "c24ac63b-902e-402b-a8bb-5468f6ccad62"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.063360 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c24ac63b-902e-402b-a8bb-5468f6ccad62" (UID: "c24ac63b-902e-402b-a8bb-5468f6ccad62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.133673 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x6f5\" (UniqueName: \"kubernetes.io/projected/c24ac63b-902e-402b-a8bb-5468f6ccad62-kube-api-access-9x6f5\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.133741 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.133774 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24ac63b-902e-402b-a8bb-5468f6ccad62-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.441634 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c24ac63b-902e-402b-a8bb-5468f6ccad62","Type":"ContainerDied","Data":"a12b6a444fc9127bbe6869ad95c2baba57a0195a7f813db9373f5a9ee3fe8e3f"} Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.443007 4780 scope.go:117] "RemoveContainer" containerID="1de7dd1d38ac3ce7143b43768b32bfccd71711cf336d1f62a1ca7c2a1768a1e9" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.441673 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.492478 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.500737 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.522545 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:43 crc kubenswrapper[4780]: E0219 08:42:43.523165 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24ac63b-902e-402b-a8bb-5468f6ccad62" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.523194 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24ac63b-902e-402b-a8bb-5468f6ccad62" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.523520 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24ac63b-902e-402b-a8bb-5468f6ccad62" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.524489 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.527346 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.528024 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.529183 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.537893 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.562917 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.562977 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.563036 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: 
I0219 08:42:43.563080 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.563231 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8g6j\" (UniqueName: \"kubernetes.io/projected/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-kube-api-access-t8g6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.665108 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.665308 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.665483 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 
08:42:43.665627 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8g6j\" (UniqueName: \"kubernetes.io/projected/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-kube-api-access-t8g6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.665777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.675668 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.676020 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.677384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.680576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.694662 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8g6j\" (UniqueName: \"kubernetes.io/projected/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-kube-api-access-t8g6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.853845 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:43 crc kubenswrapper[4780]: I0219 08:42:43.954178 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24ac63b-902e-402b-a8bb-5468f6ccad62" path="/var/lib/kubelet/pods/c24ac63b-902e-402b-a8bb-5468f6ccad62/volumes" Feb 19 08:42:44 crc kubenswrapper[4780]: I0219 08:42:44.337710 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:42:44 crc kubenswrapper[4780]: I0219 08:42:44.457661 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a7fa9686-243a-4fbe-ba17-93f9e4aa822c","Type":"ContainerStarted","Data":"b8fc3500c3da5d1d26035f618ff78cc59de71ab9d09ddfc7e46d40c5450baaeb"} Feb 19 08:42:45 crc kubenswrapper[4780]: I0219 08:42:45.473214 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a7fa9686-243a-4fbe-ba17-93f9e4aa822c","Type":"ContainerStarted","Data":"8054af60ddd0b374a2c0f62a832f4a309e45631fbcb23191918b751a178136ea"} Feb 19 08:42:45 crc kubenswrapper[4780]: I0219 08:42:45.505681 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.505654878 podStartE2EDuration="2.505654878s" podCreationTimestamp="2026-02-19 08:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:45.492259568 +0000 UTC m=+1308.235917027" watchObservedRunningTime="2026-02-19 08:42:45.505654878 +0000 UTC m=+1308.249312367" Feb 19 08:42:45 crc kubenswrapper[4780]: I0219 08:42:45.679008 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 08:42:45 crc kubenswrapper[4780]: I0219 08:42:45.679840 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 08:42:45 crc kubenswrapper[4780]: I0219 08:42:45.680435 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 08:42:45 crc kubenswrapper[4780]: I0219 08:42:45.683974 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.486777 4780 generic.go:334] "Generic (PLEG): container finished" podID="0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" containerID="c1e91a87f73224be9b1e1c661e6be1cc05ace2a3bc8a0c6cc1bf125f0b7a0238" exitCode=0 Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.487697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" event={"ID":"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a","Type":"ContainerDied","Data":"c1e91a87f73224be9b1e1c661e6be1cc05ace2a3bc8a0c6cc1bf125f0b7a0238"} Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.488721 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.493296 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 08:42:46 crc 
kubenswrapper[4780]: I0219 08:42:46.690802 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7677694455-29vx8"] Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.692660 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.703973 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7677694455-29vx8"] Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.727617 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-svc\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.727684 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqf5\" (UniqueName: \"kubernetes.io/projected/c0d666c4-abfe-4b46-90db-1fd272d8adb4-kube-api-access-lkqf5\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.727717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-sb\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.727756 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.727785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-config\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.727837 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-swift-storage-0\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.828845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-svc\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.828902 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqf5\" (UniqueName: \"kubernetes.io/projected/c0d666c4-abfe-4b46-90db-1fd272d8adb4-kube-api-access-lkqf5\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.828926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.828957 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-nb\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.828979 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-config\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.829019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-swift-storage-0\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.829779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-svc\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.829872 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-swift-storage-0\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") 
" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.830363 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-sb\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.830560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-nb\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.831189 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-config\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:46 crc kubenswrapper[4780]: I0219 08:42:46.847805 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqf5\" (UniqueName: \"kubernetes.io/projected/c0d666c4-abfe-4b46-90db-1fd272d8adb4-kube-api-access-lkqf5\") pod \"dnsmasq-dns-7677694455-29vx8\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.010763 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.550413 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7677694455-29vx8"] Feb 19 08:42:47 crc kubenswrapper[4780]: W0219 08:42:47.551985 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0d666c4_abfe_4b46_90db_1fd272d8adb4.slice/crio-3026de95c972f644dcc6975e685130bb67a66ef8e249d17cd63109465aaa4192 WatchSource:0}: Error finding container 3026de95c972f644dcc6975e685130bb67a66ef8e249d17cd63109465aaa4192: Status 404 returned error can't find the container with id 3026de95c972f644dcc6975e685130bb67a66ef8e249d17cd63109465aaa4192 Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.773079 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.948847 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-combined-ca-bundle\") pod \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.949310 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-scripts\") pod \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.949374 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkmkt\" (UniqueName: \"kubernetes.io/projected/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-kube-api-access-kkmkt\") pod \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\" (UID: 
\"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.949465 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-config-data\") pod \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\" (UID: \"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a\") " Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.953491 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-scripts" (OuterVolumeSpecName: "scripts") pod "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" (UID: "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.959443 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-kube-api-access-kkmkt" (OuterVolumeSpecName: "kube-api-access-kkmkt") pod "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" (UID: "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a"). InnerVolumeSpecName "kube-api-access-kkmkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:47 crc kubenswrapper[4780]: I0219 08:42:47.991739 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-config-data" (OuterVolumeSpecName: "config-data") pod "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" (UID: "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.017750 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" (UID: "0d5b732f-e5c7-4bec-8c32-4d16e07ce21a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.052082 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.052139 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.052153 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkmkt\" (UniqueName: \"kubernetes.io/projected/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-kube-api-access-kkmkt\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.052166 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.504666 4780 generic.go:334] "Generic (PLEG): container finished" podID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerID="ffd0ca69537eaf03318f20e47fc4b933b926a18775547b75f0af4f02aadae087" exitCode=0 Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.504757 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-29vx8" 
event={"ID":"c0d666c4-abfe-4b46-90db-1fd272d8adb4","Type":"ContainerDied","Data":"ffd0ca69537eaf03318f20e47fc4b933b926a18775547b75f0af4f02aadae087"} Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.510484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-29vx8" event={"ID":"c0d666c4-abfe-4b46-90db-1fd272d8adb4","Type":"ContainerStarted","Data":"3026de95c972f644dcc6975e685130bb67a66ef8e249d17cd63109465aaa4192"} Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.518761 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" event={"ID":"0d5b732f-e5c7-4bec-8c32-4d16e07ce21a","Type":"ContainerDied","Data":"ffa835bffe8d2ce53492183d8b453188676a06c00f5215f5088447da51a119b5"} Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.518884 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa835bffe8d2ce53492183d8b453188676a06c00f5215f5088447da51a119b5" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.513354 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bv9tc" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.620606 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 08:42:48 crc kubenswrapper[4780]: E0219 08:42:48.621392 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" containerName="nova-cell1-conductor-db-sync" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.621412 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" containerName="nova-cell1-conductor-db-sync" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.622491 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" containerName="nova-cell1-conductor-db-sync" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.623589 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.629541 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.648223 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.766222 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwpv\" (UniqueName: \"kubernetes.io/projected/d44b6c27-15b7-4e04-ac73-742091b1b33d-kube-api-access-hzwpv\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.766495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.766636 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.854440 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.868688 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.868757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.868824 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwpv\" (UniqueName: \"kubernetes.io/projected/d44b6c27-15b7-4e04-ac73-742091b1b33d-kube-api-access-hzwpv\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.872887 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.873525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.888502 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwpv\" (UniqueName: \"kubernetes.io/projected/d44b6c27-15b7-4e04-ac73-742091b1b33d-kube-api-access-hzwpv\") pod \"nova-cell1-conductor-0\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.890761 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.891113 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-central-agent" containerID="cri-o://61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245" gracePeriod=30 Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.891723 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="proxy-httpd" containerID="cri-o://bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1" gracePeriod=30 Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.891806 4780 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="sg-core" containerID="cri-o://3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c" gracePeriod=30 Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.891860 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-notification-agent" containerID="cri-o://d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489" gracePeriod=30 Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.907686 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.191:3000/\": EOF" Feb 19 08:42:48 crc kubenswrapper[4780]: I0219 08:42:48.988933 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.263167 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:49 crc kubenswrapper[4780]: W0219 08:42:49.483807 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44b6c27_15b7_4e04_ac73_742091b1b33d.slice/crio-a520e993f2eb26379dc054ce24ff36a6d00c7dca79a3fe0ba2dd46a863586153 WatchSource:0}: Error finding container a520e993f2eb26379dc054ce24ff36a6d00c7dca79a3fe0ba2dd46a863586153: Status 404 returned error can't find the container with id a520e993f2eb26379dc054ce24ff36a6d00c7dca79a3fe0ba2dd46a863586153 Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.486569 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.537500 4780 generic.go:334] "Generic (PLEG): container 
finished" podID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerID="bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1" exitCode=0 Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.537534 4780 generic.go:334] "Generic (PLEG): container finished" podID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerID="3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c" exitCode=2 Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.537543 4780 generic.go:334] "Generic (PLEG): container finished" podID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerID="61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245" exitCode=0 Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.537579 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerDied","Data":"bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1"} Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.537609 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerDied","Data":"3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c"} Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.537622 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerDied","Data":"61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245"} Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.539993 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d44b6c27-15b7-4e04-ac73-742091b1b33d","Type":"ContainerStarted","Data":"a520e993f2eb26379dc054ce24ff36a6d00c7dca79a3fe0ba2dd46a863586153"} Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.542330 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7677694455-29vx8" event={"ID":"c0d666c4-abfe-4b46-90db-1fd272d8adb4","Type":"ContainerStarted","Data":"a26377ef0467876a0eb785288d7dc23c271d5abc978edde8cc042649188bc179"} Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.542498 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-log" containerID="cri-o://7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be" gracePeriod=30 Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.542959 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-api" containerID="cri-o://7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037" gracePeriod=30 Feb 19 08:42:49 crc kubenswrapper[4780]: I0219 08:42:49.567712 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7677694455-29vx8" podStartSLOduration=3.567689294 podStartE2EDuration="3.567689294s" podCreationTimestamp="2026-02-19 08:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:49.564350401 +0000 UTC m=+1312.308007850" watchObservedRunningTime="2026-02-19 08:42:49.567689294 +0000 UTC m=+1312.311346753" Feb 19 08:42:50 crc kubenswrapper[4780]: I0219 08:42:50.552161 4780 generic.go:334] "Generic (PLEG): container finished" podID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerID="7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be" exitCode=143 Feb 19 08:42:50 crc kubenswrapper[4780]: I0219 08:42:50.552459 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c4556e18-aa66-4e5b-a82f-141e6019d67d","Type":"ContainerDied","Data":"7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be"} Feb 19 08:42:50 crc kubenswrapper[4780]: I0219 08:42:50.554904 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d44b6c27-15b7-4e04-ac73-742091b1b33d","Type":"ContainerStarted","Data":"a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d"} Feb 19 08:42:50 crc kubenswrapper[4780]: I0219 08:42:50.554931 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:50 crc kubenswrapper[4780]: I0219 08:42:50.554952 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:50 crc kubenswrapper[4780]: I0219 08:42:50.577381 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5773484250000003 podStartE2EDuration="2.577348425s" podCreationTimestamp="2026-02-19 08:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:50.569193014 +0000 UTC m=+1313.312850483" watchObservedRunningTime="2026-02-19 08:42:50.577348425 +0000 UTC m=+1313.321005874" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.013949 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.044874 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-scripts\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045017 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-run-httpd\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045072 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl2ng\" (UniqueName: \"kubernetes.io/projected/a3a7a88c-572e-4677-a8ce-9df09432b6d1-kube-api-access-zl2ng\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045297 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-combined-ca-bundle\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045459 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-config-data\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045507 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-log-httpd\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045524 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-sg-core-conf-yaml\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.045544 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-ceilometer-tls-certs\") pod \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\" (UID: \"a3a7a88c-572e-4677-a8ce-9df09432b6d1\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.047910 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.048283 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.050025 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-scripts" (OuterVolumeSpecName: "scripts") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.053419 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a7a88c-572e-4677-a8ce-9df09432b6d1-kube-api-access-zl2ng" (OuterVolumeSpecName: "kube-api-access-zl2ng") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "kube-api-access-zl2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.076290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.099930 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.104158 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.119361 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.146925 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-combined-ca-bundle\") pod \"c4556e18-aa66-4e5b-a82f-141e6019d67d\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.146967 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4556e18-aa66-4e5b-a82f-141e6019d67d-logs\") pod \"c4556e18-aa66-4e5b-a82f-141e6019d67d\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147021 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-668tp\" (UniqueName: \"kubernetes.io/projected/c4556e18-aa66-4e5b-a82f-141e6019d67d-kube-api-access-668tp\") pod \"c4556e18-aa66-4e5b-a82f-141e6019d67d\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147046 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-config-data\") pod \"c4556e18-aa66-4e5b-a82f-141e6019d67d\" (UID: \"c4556e18-aa66-4e5b-a82f-141e6019d67d\") " Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147428 4780 reconciler_common.go:293] 
"Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147446 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl2ng\" (UniqueName: \"kubernetes.io/projected/a3a7a88c-572e-4677-a8ce-9df09432b6d1-kube-api-access-zl2ng\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147457 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147465 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a7a88c-572e-4677-a8ce-9df09432b6d1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147494 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147505 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147513 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.147882 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4556e18-aa66-4e5b-a82f-141e6019d67d-logs" 
(OuterVolumeSpecName: "logs") pod "c4556e18-aa66-4e5b-a82f-141e6019d67d" (UID: "c4556e18-aa66-4e5b-a82f-141e6019d67d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.152051 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4556e18-aa66-4e5b-a82f-141e6019d67d-kube-api-access-668tp" (OuterVolumeSpecName: "kube-api-access-668tp") pod "c4556e18-aa66-4e5b-a82f-141e6019d67d" (UID: "c4556e18-aa66-4e5b-a82f-141e6019d67d"). InnerVolumeSpecName "kube-api-access-668tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.174944 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-config-data" (OuterVolumeSpecName: "config-data") pod "a3a7a88c-572e-4677-a8ce-9df09432b6d1" (UID: "a3a7a88c-572e-4677-a8ce-9df09432b6d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.183386 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4556e18-aa66-4e5b-a82f-141e6019d67d" (UID: "c4556e18-aa66-4e5b-a82f-141e6019d67d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.199361 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-config-data" (OuterVolumeSpecName: "config-data") pod "c4556e18-aa66-4e5b-a82f-141e6019d67d" (UID: "c4556e18-aa66-4e5b-a82f-141e6019d67d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.249302 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.249338 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4556e18-aa66-4e5b-a82f-141e6019d67d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.249348 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-668tp\" (UniqueName: \"kubernetes.io/projected/c4556e18-aa66-4e5b-a82f-141e6019d67d-kube-api-access-668tp\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.249361 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4556e18-aa66-4e5b-a82f-141e6019d67d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.249369 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a7a88c-572e-4677-a8ce-9df09432b6d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.580980 4780 generic.go:334] "Generic (PLEG): container finished" podID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerID="7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037" exitCode=0 Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.581009 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.581026 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4556e18-aa66-4e5b-a82f-141e6019d67d","Type":"ContainerDied","Data":"7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037"} Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.581603 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4556e18-aa66-4e5b-a82f-141e6019d67d","Type":"ContainerDied","Data":"5acd499476dd9cbc6bf55118ef961df48cbcdefd66c35eae0517d4e7851a577e"} Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.581625 4780 scope.go:117] "RemoveContainer" containerID="7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.586215 4780 generic.go:334] "Generic (PLEG): container finished" podID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerID="d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489" exitCode=0 Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.586259 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerDied","Data":"d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489"} Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.586319 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a7a88c-572e-4677-a8ce-9df09432b6d1","Type":"ContainerDied","Data":"3523d64af7aef16916846547decdb2c83a8a921a27f27e164e3eeea03a181b81"} Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.586389 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.607457 4780 scope.go:117] "RemoveContainer" containerID="7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.624841 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.635234 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.646380 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.660194 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.671758 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.672365 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="proxy-httpd" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.672432 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="proxy-httpd" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.672522 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="sg-core" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.672581 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="sg-core" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.672644 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-notification-agent" Feb 19 08:42:53 crc kubenswrapper[4780]: 
I0219 08:42:53.672693 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-notification-agent" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.672752 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-api" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.672807 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-api" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.672998 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-log" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673063 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-log" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.673116 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-central-agent" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673188 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-central-agent" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673424 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-api" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673526 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="sg-core" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673589 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-central-agent" Feb 19 08:42:53 crc 
kubenswrapper[4780]: I0219 08:42:53.673643 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" containerName="nova-api-log" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673705 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="proxy-httpd" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.673760 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" containerName="ceilometer-notification-agent" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.675536 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.677909 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.678273 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.678389 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.687280 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.689039 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.694184 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.694519 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.694995 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.698526 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.715249 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.718408 4780 scope.go:117] "RemoveContainer" containerID="7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.720295 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037\": container with ID starting with 7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037 not found: ID does not exist" containerID="7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.720415 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037"} err="failed to get container status \"7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037\": rpc error: code = NotFound desc = could not find container \"7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037\": container with ID starting with 
7a56dd998626ab47d4533d9a2da760fc82aae78066bb7fd693078aac419c4037 not found: ID does not exist" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.720492 4780 scope.go:117] "RemoveContainer" containerID="7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.720760 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be\": container with ID starting with 7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be not found: ID does not exist" containerID="7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.720852 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be"} err="failed to get container status \"7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be\": rpc error: code = NotFound desc = could not find container \"7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be\": container with ID starting with 7bcff94b4b6ad8d367c141b8742b907cdc52fa5f33d23898f181b7f61a25a8be not found: ID does not exist" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.720939 4780 scope.go:117] "RemoveContainer" containerID="bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.737725 4780 scope.go:117] "RemoveContainer" containerID="3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.756608 4780 scope.go:117] "RemoveContainer" containerID="d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758161 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghrf\" (UniqueName: \"kubernetes.io/projected/a6bee84d-2233-4962-94e0-bfe3c8f26496-kube-api-access-cghrf\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758205 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-run-httpd\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758235 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758335 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-scripts\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758413 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758474 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-log-httpd\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-public-tls-certs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758560 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848bb27e-1503-47d8-b316-a5e70eac4b0e-logs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-config-data\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " 
pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758639 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758680 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5w5t\" (UniqueName: \"kubernetes.io/projected/848bb27e-1503-47d8-b316-a5e70eac4b0e-kube-api-access-n5w5t\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.758740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-config-data\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.778925 4780 scope.go:117] "RemoveContainer" containerID="61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.802401 4780 scope.go:117] "RemoveContainer" containerID="bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.802879 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1\": container with ID starting with bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1 not found: ID does not exist" containerID="bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1" Feb 19 08:42:53 
crc kubenswrapper[4780]: I0219 08:42:53.802909 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1"} err="failed to get container status \"bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1\": rpc error: code = NotFound desc = could not find container \"bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1\": container with ID starting with bba77eb39393ea669e4b634dd3c1c628dfaea4507bc7966052e1f2e2f20ab5b1 not found: ID does not exist" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.802931 4780 scope.go:117] "RemoveContainer" containerID="3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.803357 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c\": container with ID starting with 3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c not found: ID does not exist" containerID="3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.803490 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c"} err="failed to get container status \"3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c\": rpc error: code = NotFound desc = could not find container \"3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c\": container with ID starting with 3f96a66c33b6201664382a9f27036de2abee851564e9ed0e3c4ffa614d5af47c not found: ID does not exist" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.803617 4780 scope.go:117] "RemoveContainer" containerID="d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489" Feb 19 
08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.804486 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489\": container with ID starting with d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489 not found: ID does not exist" containerID="d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.804530 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489"} err="failed to get container status \"d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489\": rpc error: code = NotFound desc = could not find container \"d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489\": container with ID starting with d2d8bd8c79ad076fe36097dec3e641263e04e77869829c0b202892a280908489 not found: ID does not exist" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.804559 4780 scope.go:117] "RemoveContainer" containerID="61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245" Feb 19 08:42:53 crc kubenswrapper[4780]: E0219 08:42:53.804864 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245\": container with ID starting with 61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245 not found: ID does not exist" containerID="61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.804887 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245"} err="failed to get container status 
\"61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245\": rpc error: code = NotFound desc = could not find container \"61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245\": container with ID starting with 61f09069c8909c3dac2d83a8d1f920f40ce70f96aedd1a487dd84f0b045bc245 not found: ID does not exist" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.854360 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861005 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghrf\" (UniqueName: \"kubernetes.io/projected/a6bee84d-2233-4962-94e0-bfe3c8f26496-kube-api-access-cghrf\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861252 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-run-httpd\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861345 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-scripts\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861367 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861398 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-log-httpd\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861420 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-public-tls-certs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.861758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-run-httpd\") pod \"ceilometer-0\" (UID: 
\"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862352 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-log-httpd\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862420 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848bb27e-1503-47d8-b316-a5e70eac4b0e-logs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-config-data\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862534 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5w5t\" (UniqueName: \"kubernetes.io/projected/848bb27e-1503-47d8-b316-a5e70eac4b0e-kube-api-access-n5w5t\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862657 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-config-data\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.862957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848bb27e-1503-47d8-b316-a5e70eac4b0e-logs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.865915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-public-tls-certs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.866640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.866673 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.867606 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.867733 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-config-data\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.868396 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.870440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-config-data\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.872349 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.875136 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-scripts\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.877535 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5w5t\" (UniqueName: \"kubernetes.io/projected/848bb27e-1503-47d8-b316-a5e70eac4b0e-kube-api-access-n5w5t\") pod \"nova-api-0\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " pod="openstack/nova-api-0" Feb 19 08:42:53 
crc kubenswrapper[4780]: I0219 08:42:53.882808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghrf\" (UniqueName: \"kubernetes.io/projected/a6bee84d-2233-4962-94e0-bfe3c8f26496-kube-api-access-cghrf\") pod \"ceilometer-0\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " pod="openstack/ceilometer-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.885466 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:53 crc kubenswrapper[4780]: I0219 08:42:53.968006 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3a7a88c-572e-4677-a8ce-9df09432b6d1" path="/var/lib/kubelet/pods/a3a7a88c-572e-4677-a8ce-9df09432b6d1/volumes" Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.026137 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.030168 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.062777 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4556e18-aa66-4e5b-a82f-141e6019d67d" path="/var/lib/kubelet/pods/c4556e18-aa66-4e5b-a82f-141e6019d67d/volumes" Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.568090 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:42:54 crc kubenswrapper[4780]: W0219 08:42:54.581519 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6bee84d_2233_4962_94e0_bfe3c8f26496.slice/crio-918b2d42e69b07275cd6b8babf74e31c4a2b4665fddc1c165832b401f2fb924f WatchSource:0}: Error finding container 918b2d42e69b07275cd6b8babf74e31c4a2b4665fddc1c165832b401f2fb924f: Status 404 returned error can't find the container with id 918b2d42e69b07275cd6b8babf74e31c4a2b4665fddc1c165832b401f2fb924f Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.600277 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerStarted","Data":"918b2d42e69b07275cd6b8babf74e31c4a2b4665fddc1c165832b401f2fb924f"} Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.601691 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"848bb27e-1503-47d8-b316-a5e70eac4b0e","Type":"ContainerStarted","Data":"defb7dc6edfaab4b720cdba08ebabff5a17dc017d949416b81b177034ca7bad4"} Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.604223 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:42:54 crc kubenswrapper[4780]: I0219 08:42:54.621074 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:42:55 crc kubenswrapper[4780]: I0219 08:42:55.618608 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerStarted","Data":"1cfc9bf4b6c77d959a0f79e0b6127d18398787d6583e7ce82131bd062a4da946"} Feb 19 08:42:55 crc kubenswrapper[4780]: I0219 08:42:55.624503 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"848bb27e-1503-47d8-b316-a5e70eac4b0e","Type":"ContainerStarted","Data":"81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b"} Feb 19 08:42:55 crc kubenswrapper[4780]: I0219 08:42:55.624585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"848bb27e-1503-47d8-b316-a5e70eac4b0e","Type":"ContainerStarted","Data":"67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d"} Feb 19 08:42:55 crc kubenswrapper[4780]: I0219 08:42:55.648988 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.648962961 podStartE2EDuration="2.648962961s" podCreationTimestamp="2026-02-19 08:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:42:55.645538237 +0000 UTC m=+1318.389195686" watchObservedRunningTime="2026-02-19 08:42:55.648962961 +0000 UTC m=+1318.392620410" Feb 19 08:42:56 crc kubenswrapper[4780]: I0219 08:42:56.653623 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerStarted","Data":"b1f8f92c605a74c8e4de483c71a576834ddf5781b144587755d1b657923d5477"} Feb 19 08:42:56 crc kubenswrapper[4780]: I0219 08:42:56.654084 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerStarted","Data":"b3f39502442fe07eed7a1a803c209b72c96771a7fcc5a2b9991e435b889f53cf"} Feb 19 08:42:57 crc kubenswrapper[4780]: 
I0219 08:42:57.012410 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.087316 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-pb2s2"] Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.087586 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerName="dnsmasq-dns" containerID="cri-o://b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd" gracePeriod=10 Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.595029 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.679184 4780 generic.go:334] "Generic (PLEG): container finished" podID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerID="b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd" exitCode=0 Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.679233 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.679254 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" event={"ID":"bd2019cd-bf0c-411f-855c-9f93dcd39d26","Type":"ContainerDied","Data":"b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd"} Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.679331 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-pb2s2" event={"ID":"bd2019cd-bf0c-411f-855c-9f93dcd39d26","Type":"ContainerDied","Data":"5f5a69c3d29cc5edf959c237c9d4caa7c0157c6d19bb6792a057c2a34e3d45e2"} Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.679357 4780 scope.go:117] "RemoveContainer" containerID="b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.738371 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-svc\") pod \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.738449 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxzbb\" (UniqueName: \"kubernetes.io/projected/bd2019cd-bf0c-411f-855c-9f93dcd39d26-kube-api-access-qxzbb\") pod \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.738529 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-nb\") pod \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.738616 
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-swift-storage-0\") pod \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.738632 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-config\") pod \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.738694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-sb\") pod \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\" (UID: \"bd2019cd-bf0c-411f-855c-9f93dcd39d26\") " Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.744946 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2019cd-bf0c-411f-855c-9f93dcd39d26-kube-api-access-qxzbb" (OuterVolumeSpecName: "kube-api-access-qxzbb") pod "bd2019cd-bf0c-411f-855c-9f93dcd39d26" (UID: "bd2019cd-bf0c-411f-855c-9f93dcd39d26"). InnerVolumeSpecName "kube-api-access-qxzbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.788574 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd2019cd-bf0c-411f-855c-9f93dcd39d26" (UID: "bd2019cd-bf0c-411f-855c-9f93dcd39d26"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.797505 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd2019cd-bf0c-411f-855c-9f93dcd39d26" (UID: "bd2019cd-bf0c-411f-855c-9f93dcd39d26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.804182 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd2019cd-bf0c-411f-855c-9f93dcd39d26" (UID: "bd2019cd-bf0c-411f-855c-9f93dcd39d26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.813368 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd2019cd-bf0c-411f-855c-9f93dcd39d26" (UID: "bd2019cd-bf0c-411f-855c-9f93dcd39d26"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.815388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-config" (OuterVolumeSpecName: "config") pod "bd2019cd-bf0c-411f-855c-9f93dcd39d26" (UID: "bd2019cd-bf0c-411f-855c-9f93dcd39d26"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.840500 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.840747 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.840852 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.840933 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.841015 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2019cd-bf0c-411f-855c-9f93dcd39d26-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.841082 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxzbb\" (UniqueName: \"kubernetes.io/projected/bd2019cd-bf0c-411f-855c-9f93dcd39d26-kube-api-access-qxzbb\") on node \"crc\" DevicePath \"\"" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.882706 4780 scope.go:117] "RemoveContainer" containerID="1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.904395 4780 scope.go:117] "RemoveContainer" 
containerID="b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd" Feb 19 08:42:57 crc kubenswrapper[4780]: E0219 08:42:57.904788 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd\": container with ID starting with b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd not found: ID does not exist" containerID="b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.904891 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd"} err="failed to get container status \"b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd\": rpc error: code = NotFound desc = could not find container \"b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd\": container with ID starting with b4d8e5e518c7d02bd40cb3cdfb3223dd3f1979c0be82ab1ec9a2d6c8a9881fdd not found: ID does not exist" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.904991 4780 scope.go:117] "RemoveContainer" containerID="1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3" Feb 19 08:42:57 crc kubenswrapper[4780]: E0219 08:42:57.905416 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3\": container with ID starting with 1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3 not found: ID does not exist" containerID="1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3" Feb 19 08:42:57 crc kubenswrapper[4780]: I0219 08:42:57.905437 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3"} err="failed to get container status \"1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3\": rpc error: code = NotFound desc = could not find container \"1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3\": container with ID starting with 1bbce264f873620d1ed197fde79588501aac94d6fa57744593559205d703c0c3 not found: ID does not exist" Feb 19 08:42:58 crc kubenswrapper[4780]: I0219 08:42:58.052967 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-pb2s2"] Feb 19 08:42:58 crc kubenswrapper[4780]: I0219 08:42:58.063024 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-pb2s2"] Feb 19 08:42:58 crc kubenswrapper[4780]: I0219 08:42:58.689836 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerStarted","Data":"11546e606f0ed19c34b297d01479e536a89c87d04b0b835ed462a9e04f3f7c79"} Feb 19 08:42:58 crc kubenswrapper[4780]: I0219 08:42:58.690004 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 08:42:58 crc kubenswrapper[4780]: I0219 08:42:58.715446 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.628753593 podStartE2EDuration="5.715425632s" podCreationTimestamp="2026-02-19 08:42:53 +0000 UTC" firstStartedPulling="2026-02-19 08:42:54.584135928 +0000 UTC m=+1317.327793377" lastFinishedPulling="2026-02-19 08:42:57.670807967 +0000 UTC m=+1320.414465416" observedRunningTime="2026-02-19 08:42:58.713250218 +0000 UTC m=+1321.456907697" watchObservedRunningTime="2026-02-19 08:42:58.715425632 +0000 UTC m=+1321.459083081" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.037709 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-conductor-0" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.626175 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-x87w6"] Feb 19 08:42:59 crc kubenswrapper[4780]: E0219 08:42:59.626651 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerName="init" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.626672 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerName="init" Feb 19 08:42:59 crc kubenswrapper[4780]: E0219 08:42:59.626686 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerName="dnsmasq-dns" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.626716 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerName="dnsmasq-dns" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.626946 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" containerName="dnsmasq-dns" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.627659 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.629491 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.630545 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.653022 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x87w6"] Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.789176 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.789299 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-scripts\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.789325 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zrm\" (UniqueName: \"kubernetes.io/projected/b5dbddd7-97e0-495e-8cc0-326e18ea5843-kube-api-access-l9zrm\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.789371 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-config-data\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.891181 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-scripts\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.891244 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zrm\" (UniqueName: \"kubernetes.io/projected/b5dbddd7-97e0-495e-8cc0-326e18ea5843-kube-api-access-l9zrm\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.891304 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-config-data\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.891386 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.896764 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-config-data\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.897249 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.922382 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-scripts\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.922537 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zrm\" (UniqueName: \"kubernetes.io/projected/b5dbddd7-97e0-495e-8cc0-326e18ea5843-kube-api-access-l9zrm\") pod \"nova-cell1-cell-mapping-x87w6\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.953037 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2019cd-bf0c-411f-855c-9f93dcd39d26" path="/var/lib/kubelet/pods/bd2019cd-bf0c-411f-855c-9f93dcd39d26/volumes" Feb 19 08:42:59 crc kubenswrapper[4780]: I0219 08:42:59.953750 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:43:00 crc kubenswrapper[4780]: W0219 08:43:00.437080 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5dbddd7_97e0_495e_8cc0_326e18ea5843.slice/crio-a8a6f3c7407c016834bb396f7d3de3fbb41c7d4191be02e729f393b793de08f9 WatchSource:0}: Error finding container a8a6f3c7407c016834bb396f7d3de3fbb41c7d4191be02e729f393b793de08f9: Status 404 returned error can't find the container with id a8a6f3c7407c016834bb396f7d3de3fbb41c7d4191be02e729f393b793de08f9 Feb 19 08:43:00 crc kubenswrapper[4780]: I0219 08:43:00.439718 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x87w6"] Feb 19 08:43:00 crc kubenswrapper[4780]: I0219 08:43:00.714987 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x87w6" event={"ID":"b5dbddd7-97e0-495e-8cc0-326e18ea5843","Type":"ContainerStarted","Data":"3d8bad2c317a4a0b36b6a1817e4a990bff2ebf795779d2abfaa149778ea4cf26"} Feb 19 08:43:00 crc kubenswrapper[4780]: I0219 08:43:00.715599 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x87w6" event={"ID":"b5dbddd7-97e0-495e-8cc0-326e18ea5843","Type":"ContainerStarted","Data":"a8a6f3c7407c016834bb396f7d3de3fbb41c7d4191be02e729f393b793de08f9"} Feb 19 08:43:00 crc kubenswrapper[4780]: I0219 08:43:00.735014 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-x87w6" podStartSLOduration=1.734994602 podStartE2EDuration="1.734994602s" podCreationTimestamp="2026-02-19 08:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:43:00.726646706 +0000 UTC m=+1323.470304155" watchObservedRunningTime="2026-02-19 08:43:00.734994602 +0000 UTC m=+1323.478652071" Feb 19 08:43:04 crc 
kubenswrapper[4780]: I0219 08:43:04.031152 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 08:43:04 crc kubenswrapper[4780]: I0219 08:43:04.031426 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 08:43:05 crc kubenswrapper[4780]: I0219 08:43:05.052373 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:05 crc kubenswrapper[4780]: I0219 08:43:05.052418 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:05 crc kubenswrapper[4780]: I0219 08:43:05.776411 4780 generic.go:334] "Generic (PLEG): container finished" podID="b5dbddd7-97e0-495e-8cc0-326e18ea5843" containerID="3d8bad2c317a4a0b36b6a1817e4a990bff2ebf795779d2abfaa149778ea4cf26" exitCode=0 Feb 19 08:43:05 crc kubenswrapper[4780]: I0219 08:43:05.776505 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x87w6" event={"ID":"b5dbddd7-97e0-495e-8cc0-326e18ea5843","Type":"ContainerDied","Data":"3d8bad2c317a4a0b36b6a1817e4a990bff2ebf795779d2abfaa149778ea4cf26"} Feb 19 08:43:06 crc kubenswrapper[4780]: I0219 08:43:06.336635 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:43:06 crc 
kubenswrapper[4780]: I0219 08:43:06.337410 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.267876 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.364726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-scripts\") pod \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.364914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-config-data\") pod \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.365001 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-combined-ca-bundle\") pod \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.365077 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zrm\" (UniqueName: \"kubernetes.io/projected/b5dbddd7-97e0-495e-8cc0-326e18ea5843-kube-api-access-l9zrm\") pod \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\" (UID: \"b5dbddd7-97e0-495e-8cc0-326e18ea5843\") " Feb 19 08:43:07 crc 
kubenswrapper[4780]: I0219 08:43:07.370419 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-scripts" (OuterVolumeSpecName: "scripts") pod "b5dbddd7-97e0-495e-8cc0-326e18ea5843" (UID: "b5dbddd7-97e0-495e-8cc0-326e18ea5843"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.372508 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5dbddd7-97e0-495e-8cc0-326e18ea5843-kube-api-access-l9zrm" (OuterVolumeSpecName: "kube-api-access-l9zrm") pod "b5dbddd7-97e0-495e-8cc0-326e18ea5843" (UID: "b5dbddd7-97e0-495e-8cc0-326e18ea5843"). InnerVolumeSpecName "kube-api-access-l9zrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.395264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-config-data" (OuterVolumeSpecName: "config-data") pod "b5dbddd7-97e0-495e-8cc0-326e18ea5843" (UID: "b5dbddd7-97e0-495e-8cc0-326e18ea5843"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.399573 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5dbddd7-97e0-495e-8cc0-326e18ea5843" (UID: "b5dbddd7-97e0-495e-8cc0-326e18ea5843"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.467242 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.467611 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.467625 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zrm\" (UniqueName: \"kubernetes.io/projected/b5dbddd7-97e0-495e-8cc0-326e18ea5843-kube-api-access-l9zrm\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.467636 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dbddd7-97e0-495e-8cc0-326e18ea5843-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.800237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x87w6" event={"ID":"b5dbddd7-97e0-495e-8cc0-326e18ea5843","Type":"ContainerDied","Data":"a8a6f3c7407c016834bb396f7d3de3fbb41c7d4191be02e729f393b793de08f9"} Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.800306 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a6f3c7407c016834bb396f7d3de3fbb41c7d4191be02e729f393b793de08f9" Feb 19 08:43:07 crc kubenswrapper[4780]: I0219 08:43:07.800342 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x87w6" Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.020058 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.020398 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-log" containerID="cri-o://67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d" gracePeriod=30 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.020673 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-api" containerID="cri-o://81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b" gracePeriod=30 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.031660 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.031957 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dca213b7-a7fc-4a51-816e-69bb4b586520" containerName="nova-scheduler-scheduler" containerID="cri-o://b6f26a651ab0f1793c7bf0c31982d1f809f1110c20c3b9d37e0f0efbe9c0ebe7" gracePeriod=30 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.105268 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.105471 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-log" containerID="cri-o://fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba" gracePeriod=30 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.105947 4780 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-metadata" containerID="cri-o://5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c" gracePeriod=30 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.815736 4780 generic.go:334] "Generic (PLEG): container finished" podID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerID="67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d" exitCode=143 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.816071 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"848bb27e-1503-47d8-b316-a5e70eac4b0e","Type":"ContainerDied","Data":"67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d"} Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.817321 4780 generic.go:334] "Generic (PLEG): container finished" podID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerID="fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba" exitCode=143 Feb 19 08:43:08 crc kubenswrapper[4780]: I0219 08:43:08.817344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80d54d71-4d2f-44ec-bf81-53a184bdb557","Type":"ContainerDied","Data":"fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba"} Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.646669 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.764823 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-config-data\") pod \"848bb27e-1503-47d8-b316-a5e70eac4b0e\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.764899 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-public-tls-certs\") pod \"848bb27e-1503-47d8-b316-a5e70eac4b0e\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.765011 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-combined-ca-bundle\") pod \"848bb27e-1503-47d8-b316-a5e70eac4b0e\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.765040 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-internal-tls-certs\") pod \"848bb27e-1503-47d8-b316-a5e70eac4b0e\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.765106 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848bb27e-1503-47d8-b316-a5e70eac4b0e-logs\") pod \"848bb27e-1503-47d8-b316-a5e70eac4b0e\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.765209 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5w5t\" (UniqueName: 
\"kubernetes.io/projected/848bb27e-1503-47d8-b316-a5e70eac4b0e-kube-api-access-n5w5t\") pod \"848bb27e-1503-47d8-b316-a5e70eac4b0e\" (UID: \"848bb27e-1503-47d8-b316-a5e70eac4b0e\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.766947 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848bb27e-1503-47d8-b316-a5e70eac4b0e-logs" (OuterVolumeSpecName: "logs") pod "848bb27e-1503-47d8-b316-a5e70eac4b0e" (UID: "848bb27e-1503-47d8-b316-a5e70eac4b0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.771421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848bb27e-1503-47d8-b316-a5e70eac4b0e-kube-api-access-n5w5t" (OuterVolumeSpecName: "kube-api-access-n5w5t") pod "848bb27e-1503-47d8-b316-a5e70eac4b0e" (UID: "848bb27e-1503-47d8-b316-a5e70eac4b0e"). InnerVolumeSpecName "kube-api-access-n5w5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.798040 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-config-data" (OuterVolumeSpecName: "config-data") pod "848bb27e-1503-47d8-b316-a5e70eac4b0e" (UID: "848bb27e-1503-47d8-b316-a5e70eac4b0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.813655 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "848bb27e-1503-47d8-b316-a5e70eac4b0e" (UID: "848bb27e-1503-47d8-b316-a5e70eac4b0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.816363 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.824791 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "848bb27e-1503-47d8-b316-a5e70eac4b0e" (UID: "848bb27e-1503-47d8-b316-a5e70eac4b0e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.833264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "848bb27e-1503-47d8-b316-a5e70eac4b0e" (UID: "848bb27e-1503-47d8-b316-a5e70eac4b0e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.856582 4780 generic.go:334] "Generic (PLEG): container finished" podID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerID="5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c" exitCode=0 Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.856646 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80d54d71-4d2f-44ec-bf81-53a184bdb557","Type":"ContainerDied","Data":"5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c"} Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.856675 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80d54d71-4d2f-44ec-bf81-53a184bdb557","Type":"ContainerDied","Data":"b00f3a71f246d59d172145ffb5402dd2ae4352b6237288af2fe962d46ad6f0ac"} Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.856695 4780 scope.go:117] "RemoveContainer" containerID="5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.856792 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.860849 4780 generic.go:334] "Generic (PLEG): container finished" podID="dca213b7-a7fc-4a51-816e-69bb4b586520" containerID="b6f26a651ab0f1793c7bf0c31982d1f809f1110c20c3b9d37e0f0efbe9c0ebe7" exitCode=0 Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.860908 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dca213b7-a7fc-4a51-816e-69bb4b586520","Type":"ContainerDied","Data":"b6f26a651ab0f1793c7bf0c31982d1f809f1110c20c3b9d37e0f0efbe9c0ebe7"} Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.862889 4780 generic.go:334] "Generic (PLEG): container finished" podID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerID="81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b" exitCode=0 Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.862935 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.862997 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"848bb27e-1503-47d8-b316-a5e70eac4b0e","Type":"ContainerDied","Data":"81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b"} Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.863113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"848bb27e-1503-47d8-b316-a5e70eac4b0e","Type":"ContainerDied","Data":"defb7dc6edfaab4b720cdba08ebabff5a17dc017d949416b81b177034ca7bad4"} Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.866918 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.866968 4780 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.866978 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.866987 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848bb27e-1503-47d8-b316-a5e70eac4b0e-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.866995 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5w5t\" (UniqueName: \"kubernetes.io/projected/848bb27e-1503-47d8-b316-a5e70eac4b0e-kube-api-access-n5w5t\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.867005 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848bb27e-1503-47d8-b316-a5e70eac4b0e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.889631 4780 scope.go:117] "RemoveContainer" containerID="fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.904396 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.918262 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.923094 4780 scope.go:117] "RemoveContainer" containerID="5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.924687 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c\": container with ID starting with 5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c not found: ID does not exist" containerID="5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.924740 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c"} err="failed to get container status \"5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c\": rpc error: code = NotFound desc = could not find container \"5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c\": container with ID starting with 5684f802586d7a479ed9e3cc8811d1fb9c2c518e7dfa2a2fa34fcb4342f6290c not found: ID does not exist" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.924770 4780 scope.go:117] "RemoveContainer" containerID="fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.925144 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba\": container with ID starting with fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba not found: ID does not exist" containerID="fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.925211 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba"} err="failed to get container status \"fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba\": rpc error: code = NotFound desc = could not find container 
\"fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba\": container with ID starting with fa0439ebdaf0daeeca3ff01e5686d77524e4dc4a26e5c756247bbdd5142c1dba not found: ID does not exist" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.925230 4780 scope.go:117] "RemoveContainer" containerID="81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.950172 4780 scope.go:117] "RemoveContainer" containerID="67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.964811 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" path="/var/lib/kubelet/pods/848bb27e-1503-47d8-b316-a5e70eac4b0e/volumes" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.968114 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-config-data\") pod \"80d54d71-4d2f-44ec-bf81-53a184bdb557\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.968211 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wczqt\" (UniqueName: \"kubernetes.io/projected/80d54d71-4d2f-44ec-bf81-53a184bdb557-kube-api-access-wczqt\") pod \"80d54d71-4d2f-44ec-bf81-53a184bdb557\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.968332 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-combined-ca-bundle\") pod \"80d54d71-4d2f-44ec-bf81-53a184bdb557\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.968389 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-nova-metadata-tls-certs\") pod \"80d54d71-4d2f-44ec-bf81-53a184bdb557\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.968518 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80d54d71-4d2f-44ec-bf81-53a184bdb557-logs\") pod \"80d54d71-4d2f-44ec-bf81-53a184bdb557\" (UID: \"80d54d71-4d2f-44ec-bf81-53a184bdb557\") " Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80d54d71-4d2f-44ec-bf81-53a184bdb557-logs" (OuterVolumeSpecName: "logs") pod "80d54d71-4d2f-44ec-bf81-53a184bdb557" (UID: "80d54d71-4d2f-44ec-bf81-53a184bdb557"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969342 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.969770 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-log" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969792 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-log" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.969812 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-metadata" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969821 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-metadata" Feb 19 08:43:11 crc 
kubenswrapper[4780]: E0219 08:43:11.969836 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dbddd7-97e0-495e-8cc0-326e18ea5843" containerName="nova-manage" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969844 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dbddd7-97e0-495e-8cc0-326e18ea5843" containerName="nova-manage" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.969855 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-log" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969863 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-log" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.969890 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-api" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.969898 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-api" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.970108 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-api" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.970146 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-metadata" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.970163 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dbddd7-97e0-495e-8cc0-326e18ea5843" containerName="nova-manage" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.970184 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="848bb27e-1503-47d8-b316-a5e70eac4b0e" containerName="nova-api-log" Feb 19 08:43:11 crc 
kubenswrapper[4780]: I0219 08:43:11.970205 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-log" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.971381 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.971533 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.977718 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d54d71-4d2f-44ec-bf81-53a184bdb557-kube-api-access-wczqt" (OuterVolumeSpecName: "kube-api-access-wczqt") pod "80d54d71-4d2f-44ec-bf81-53a184bdb557" (UID: "80d54d71-4d2f-44ec-bf81-53a184bdb557"). InnerVolumeSpecName "kube-api-access-wczqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.978347 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.978693 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.978846 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.986926 4780 scope.go:117] "RemoveContainer" containerID="81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.988325 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b\": container with ID starting with 81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b not 
found: ID does not exist" containerID="81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.988375 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b"} err="failed to get container status \"81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b\": rpc error: code = NotFound desc = could not find container \"81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b\": container with ID starting with 81725407a5be22dfc81ab13df4735c55931793ac41003f753c6795f495546c6b not found: ID does not exist" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.988406 4780 scope.go:117] "RemoveContainer" containerID="67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d" Feb 19 08:43:11 crc kubenswrapper[4780]: E0219 08:43:11.988730 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d\": container with ID starting with 67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d not found: ID does not exist" containerID="67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d" Feb 19 08:43:11 crc kubenswrapper[4780]: I0219 08:43:11.988772 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d"} err="failed to get container status \"67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d\": rpc error: code = NotFound desc = could not find container \"67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d\": container with ID starting with 67eac1c5d1cd7832bc965a62fe54cfdf2cd27947d24caa2490fbebc50e62de9d not found: ID does not exist" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.024905 
4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80d54d71-4d2f-44ec-bf81-53a184bdb557" (UID: "80d54d71-4d2f-44ec-bf81-53a184bdb557"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.028335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-config-data" (OuterVolumeSpecName: "config-data") pod "80d54d71-4d2f-44ec-bf81-53a184bdb557" (UID: "80d54d71-4d2f-44ec-bf81-53a184bdb557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.037022 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "80d54d71-4d2f-44ec-bf81-53a184bdb557" (UID: "80d54d71-4d2f-44ec-bf81-53a184bdb557"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.040556 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7fxf\" (UniqueName: \"kubernetes.io/projected/9b47d55e-fb13-4f2f-8708-a68119e39b60-kube-api-access-q7fxf\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070474 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070533 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b47d55e-fb13-4f2f-8708-a68119e39b60-logs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070570 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-config-data\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070711 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070777 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80d54d71-4d2f-44ec-bf81-53a184bdb557-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070787 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070797 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wczqt\" (UniqueName: \"kubernetes.io/projected/80d54d71-4d2f-44ec-bf81-53a184bdb557-kube-api-access-wczqt\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070807 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.070815 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d54d71-4d2f-44ec-bf81-53a184bdb557-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.172405 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-combined-ca-bundle\") pod \"dca213b7-a7fc-4a51-816e-69bb4b586520\" (UID: 
\"dca213b7-a7fc-4a51-816e-69bb4b586520\") " Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.172759 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c866\" (UniqueName: \"kubernetes.io/projected/dca213b7-a7fc-4a51-816e-69bb4b586520-kube-api-access-2c866\") pod \"dca213b7-a7fc-4a51-816e-69bb4b586520\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.172822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-config-data\") pod \"dca213b7-a7fc-4a51-816e-69bb4b586520\" (UID: \"dca213b7-a7fc-4a51-816e-69bb4b586520\") " Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.173867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.173945 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7fxf\" (UniqueName: \"kubernetes.io/projected/9b47d55e-fb13-4f2f-8708-a68119e39b60-kube-api-access-q7fxf\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.173971 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.174000 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.174068 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b47d55e-fb13-4f2f-8708-a68119e39b60-logs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.174115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-config-data\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.175060 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b47d55e-fb13-4f2f-8708-a68119e39b60-logs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.179844 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.186066 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.190944 4780 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.191586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-config-data\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.194953 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.195279 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca213b7-a7fc-4a51-816e-69bb4b586520-kube-api-access-2c866" (OuterVolumeSpecName: "kube-api-access-2c866") pod "dca213b7-a7fc-4a51-816e-69bb4b586520" (UID: "dca213b7-a7fc-4a51-816e-69bb4b586520"). InnerVolumeSpecName "kube-api-access-2c866". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.204222 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.206401 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7fxf\" (UniqueName: \"kubernetes.io/projected/9b47d55e-fb13-4f2f-8708-a68119e39b60-kube-api-access-q7fxf\") pod \"nova-api-0\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.219842 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: E0219 08:43:12.220590 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca213b7-a7fc-4a51-816e-69bb4b586520" containerName="nova-scheduler-scheduler" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.220698 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca213b7-a7fc-4a51-816e-69bb4b586520" containerName="nova-scheduler-scheduler" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.220994 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca213b7-a7fc-4a51-816e-69bb4b586520" containerName="nova-scheduler-scheduler" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.222333 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.225362 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.225764 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.227466 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dca213b7-a7fc-4a51-816e-69bb4b586520" (UID: "dca213b7-a7fc-4a51-816e-69bb4b586520"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.243111 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-config-data" (OuterVolumeSpecName: "config-data") pod "dca213b7-a7fc-4a51-816e-69bb4b586520" (UID: "dca213b7-a7fc-4a51-816e-69bb4b586520"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.246277 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.276863 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.276894 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c866\" (UniqueName: \"kubernetes.io/projected/dca213b7-a7fc-4a51-816e-69bb4b586520-kube-api-access-2c866\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.276903 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca213b7-a7fc-4a51-816e-69bb4b586520-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.301697 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.378700 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.378819 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.379135 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-config-data\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.379220 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mlc\" (UniqueName: \"kubernetes.io/projected/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-kube-api-access-j8mlc\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.379678 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-logs\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc 
kubenswrapper[4780]: I0219 08:43:12.482387 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.482476 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-config-data\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.482548 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8mlc\" (UniqueName: \"kubernetes.io/projected/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-kube-api-access-j8mlc\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.482808 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-logs\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.482926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.483439 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-logs\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.487165 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-config-data\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.489359 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.497246 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.500785 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8mlc\" (UniqueName: \"kubernetes.io/projected/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-kube-api-access-j8mlc\") pod \"nova-metadata-0\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.539673 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.776593 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: W0219 08:43:12.779222 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b47d55e_fb13_4f2f_8708_a68119e39b60.slice/crio-c14e7e2efabfd08179963d8420fd6ae68c4fdabeae619e11367cf10d208512cc WatchSource:0}: Error finding container c14e7e2efabfd08179963d8420fd6ae68c4fdabeae619e11367cf10d208512cc: Status 404 returned error can't find the container with id c14e7e2efabfd08179963d8420fd6ae68c4fdabeae619e11367cf10d208512cc Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.878243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b47d55e-fb13-4f2f-8708-a68119e39b60","Type":"ContainerStarted","Data":"c14e7e2efabfd08179963d8420fd6ae68c4fdabeae619e11367cf10d208512cc"} Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.882994 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.883010 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dca213b7-a7fc-4a51-816e-69bb4b586520","Type":"ContainerDied","Data":"b57eeb753c63d83d5e9aad7a2b4c3cce7cbb07360b0b47fab5625c7536e7c542"} Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.883070 4780 scope.go:117] "RemoveContainer" containerID="b6f26a651ab0f1793c7bf0c31982d1f809f1110c20c3b9d37e0f0efbe9c0ebe7" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.953057 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.969160 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.980527 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.982033 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.987242 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 08:43:12 crc kubenswrapper[4780]: I0219 08:43:12.989433 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.024536 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:13 crc kubenswrapper[4780]: W0219 08:43:13.030261 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee75a5b9_0f5b_4db0_ab84_e4848bf382a7.slice/crio-c915b48677d2d967daefde99c7f3e2c8ccadc1685b22792fe92915b332d410a7 WatchSource:0}: Error finding container c915b48677d2d967daefde99c7f3e2c8ccadc1685b22792fe92915b332d410a7: Status 404 returned error can't find the container with id c915b48677d2d967daefde99c7f3e2c8ccadc1685b22792fe92915b332d410a7 Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.098188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-config-data\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.098366 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.098426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4hchc\" (UniqueName: \"kubernetes.io/projected/f0213271-e4da-4b8a-a732-b90d74d540ca-kube-api-access-4hchc\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.201091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-config-data\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.201250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.201337 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hchc\" (UniqueName: \"kubernetes.io/projected/f0213271-e4da-4b8a-a732-b90d74d540ca-kube-api-access-4hchc\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.204222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-config-data\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.205213 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.230919 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hchc\" (UniqueName: \"kubernetes.io/projected/f0213271-e4da-4b8a-a732-b90d74d540ca-kube-api-access-4hchc\") pod \"nova-scheduler-0\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.308783 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.760967 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.900703 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7","Type":"ContainerStarted","Data":"ba25906e3f30c93cfd93251995fbe6b9adb85d14c0ee7551594e1bd77644bf06"} Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.900738 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7","Type":"ContainerStarted","Data":"f4e52f5c3cc7e79bbf52ddff38d3ac4f6046c9da7c1d0d4fb161ff87adbe9315"} Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.900750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7","Type":"ContainerStarted","Data":"c915b48677d2d967daefde99c7f3e2c8ccadc1685b22792fe92915b332d410a7"} Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.902174 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0213271-e4da-4b8a-a732-b90d74d540ca","Type":"ContainerStarted","Data":"d5ad40aeceebd439882d65541cee715c8751d46f7f14f54243f844cfe7ca722a"} Feb 19 08:43:13 
crc kubenswrapper[4780]: I0219 08:43:13.905424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b47d55e-fb13-4f2f-8708-a68119e39b60","Type":"ContainerStarted","Data":"27850fd9ac7fe009c176f0a9206a0fd99a8d233881001985a4dcd3b476a6ee51"} Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.905665 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b47d55e-fb13-4f2f-8708-a68119e39b60","Type":"ContainerStarted","Data":"d2450c95cfe8bb926c8e2c2b6644fad3afd770991b68151d38b6cb8f5a8ae9d1"} Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.929683 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.929670824 podStartE2EDuration="1.929670824s" podCreationTimestamp="2026-02-19 08:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:43:13.926318071 +0000 UTC m=+1336.669975520" watchObservedRunningTime="2026-02-19 08:43:13.929670824 +0000 UTC m=+1336.673328273" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.949146 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" path="/var/lib/kubelet/pods/80d54d71-4d2f-44ec-bf81-53a184bdb557/volumes" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.949850 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca213b7-a7fc-4a51-816e-69bb4b586520" path="/var/lib/kubelet/pods/dca213b7-a7fc-4a51-816e-69bb4b586520/volumes" Feb 19 08:43:13 crc kubenswrapper[4780]: I0219 08:43:13.951420 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.95139102 podStartE2EDuration="2.95139102s" podCreationTimestamp="2026-02-19 08:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:43:13.944685855 +0000 UTC m=+1336.688343304" watchObservedRunningTime="2026-02-19 08:43:13.95139102 +0000 UTC m=+1336.695048469" Feb 19 08:43:14 crc kubenswrapper[4780]: I0219 08:43:14.922668 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0213271-e4da-4b8a-a732-b90d74d540ca","Type":"ContainerStarted","Data":"641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964"} Feb 19 08:43:14 crc kubenswrapper[4780]: I0219 08:43:14.958213 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.958191642 podStartE2EDuration="2.958191642s" podCreationTimestamp="2026-02-19 08:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:43:14.949977579 +0000 UTC m=+1337.693635068" watchObservedRunningTime="2026-02-19 08:43:14.958191642 +0000 UTC m=+1337.701849101" Feb 19 08:43:16 crc kubenswrapper[4780]: I0219 08:43:16.584446 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:16 crc kubenswrapper[4780]: I0219 08:43:16.584552 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="80d54d71-4d2f-44ec-bf81-53a184bdb557" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": dial tcp 10.217.0.188:8775: i/o timeout" Feb 19 08:43:17 crc kubenswrapper[4780]: I0219 08:43:17.540723 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 08:43:17 crc kubenswrapper[4780]: 
I0219 08:43:17.540792 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 08:43:18 crc kubenswrapper[4780]: I0219 08:43:18.309143 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 08:43:22 crc kubenswrapper[4780]: I0219 08:43:22.303640 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 08:43:22 crc kubenswrapper[4780]: I0219 08:43:22.304210 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 08:43:22 crc kubenswrapper[4780]: I0219 08:43:22.540209 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 08:43:22 crc kubenswrapper[4780]: I0219 08:43:22.541877 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 08:43:23 crc kubenswrapper[4780]: I0219 08:43:23.309869 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 08:43:23 crc kubenswrapper[4780]: I0219 08:43:23.322347 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:23 crc kubenswrapper[4780]: I0219 08:43:23.322343 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:23 crc kubenswrapper[4780]: I0219 08:43:23.356388 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Feb 19 08:43:23 crc kubenswrapper[4780]: I0219 08:43:23.551371 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:23 crc kubenswrapper[4780]: I0219 08:43:23.551379 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 08:43:24 crc kubenswrapper[4780]: I0219 08:43:24.037960 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 08:43:24 crc kubenswrapper[4780]: I0219 08:43:24.092510 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 08:43:32.311636 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 08:43:32.313077 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 08:43:32.313978 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 08:43:32.324197 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 08:43:32.546843 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 
08:43:32.552169 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 08:43:32 crc kubenswrapper[4780]: I0219 08:43:32.552728 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 08:43:33 crc kubenswrapper[4780]: I0219 08:43:33.120781 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 08:43:33 crc kubenswrapper[4780]: I0219 08:43:33.127855 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 08:43:33 crc kubenswrapper[4780]: I0219 08:43:33.128831 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 08:43:36 crc kubenswrapper[4780]: I0219 08:43:36.336393 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:43:36 crc kubenswrapper[4780]: I0219 08:43:36.336749 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:43:36 crc kubenswrapper[4780]: I0219 08:43:36.336810 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:43:36 crc kubenswrapper[4780]: I0219 08:43:36.337775 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"faebb4e2dff7f5e3e2970ac268d8a29ca21fbe03139102c6901b8c69fd561a84"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:43:36 crc kubenswrapper[4780]: I0219 08:43:36.337843 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://faebb4e2dff7f5e3e2970ac268d8a29ca21fbe03139102c6901b8c69fd561a84" gracePeriod=600 Feb 19 08:43:37 crc kubenswrapper[4780]: I0219 08:43:37.164677 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="faebb4e2dff7f5e3e2970ac268d8a29ca21fbe03139102c6901b8c69fd561a84" exitCode=0 Feb 19 08:43:37 crc kubenswrapper[4780]: I0219 08:43:37.164759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"faebb4e2dff7f5e3e2970ac268d8a29ca21fbe03139102c6901b8c69fd561a84"} Feb 19 08:43:37 crc kubenswrapper[4780]: I0219 08:43:37.165212 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75"} Feb 19 08:43:37 crc kubenswrapper[4780]: I0219 08:43:37.165234 4780 scope.go:117] "RemoveContainer" containerID="7e9c7d776ebb9cfa84ab86f90613a2614d1dd3b9e07622f097d10b3d1fb58504" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.471407 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l4w4k"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.488341 
4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l4w4k"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.525771 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6ws48"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.526895 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.528258 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.578604 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6ws48"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.615496 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.615713 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="bc29e551-efab-43d8-94d5-1c515a76dca9" containerName="openstackclient" containerID="cri-o://99328aca990c1786f3a96df839e96c76191fce3e843e27d37b4c746716c1d54b" gracePeriod=2 Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.638483 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.662002 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwbn\" (UniqueName: \"kubernetes.io/projected/c5a38bbd-e720-458c-b364-a10afe06f51e-kube-api-access-8fwbn\") pod \"root-account-create-update-6ws48\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.662081 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts\") pod \"root-account-create-update-6ws48\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.683193 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-058b-account-create-update-vt5kl"] Feb 19 08:43:52 crc kubenswrapper[4780]: E0219 08:43:52.683548 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc29e551-efab-43d8-94d5-1c515a76dca9" containerName="openstackclient" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.683565 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc29e551-efab-43d8-94d5-1c515a76dca9" containerName="openstackclient" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.683747 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc29e551-efab-43d8-94d5-1c515a76dca9" containerName="openstackclient" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.696723 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-058b-account-create-update-vt5kl"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.696819 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.708928 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.726964 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-058b-account-create-update-hd56g"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.759463 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-058b-account-create-update-hd56g"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.763756 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwbn\" (UniqueName: \"kubernetes.io/projected/c5a38bbd-e720-458c-b364-a10afe06f51e-kube-api-access-8fwbn\") pod \"root-account-create-update-6ws48\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.763836 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts\") pod \"root-account-create-update-6ws48\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.764590 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts\") pod \"root-account-create-update-6ws48\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.785679 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f929-account-create-update-fx28c"] Feb 19 08:43:52 crc 
kubenswrapper[4780]: I0219 08:43:52.806789 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.842193 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f929-account-create-update-7rdjx"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.843561 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.852399 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.854892 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwbn\" (UniqueName: \"kubernetes.io/projected/c5a38bbd-e720-458c-b364-a10afe06f51e-kube-api-access-8fwbn\") pod \"root-account-create-update-6ws48\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.861479 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.868634 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d20f2167-7640-4e39-95cc-4007180d1e49-operator-scripts\") pod \"cinder-058b-account-create-update-vt5kl\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.868718 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq42b\" (UniqueName: \"kubernetes.io/projected/d20f2167-7640-4e39-95cc-4007180d1e49-kube-api-access-fq42b\") pod \"cinder-058b-account-create-update-vt5kl\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.881081 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f929-account-create-update-fx28c"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.897814 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-eb1b-account-create-update-g7kmh"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.899184 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.903251 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.915656 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f929-account-create-update-7rdjx"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.968008 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eb1b-account-create-update-g7kmh"] Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.974149 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcvv\" (UniqueName: \"kubernetes.io/projected/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-kube-api-access-hzcvv\") pod \"neutron-f929-account-create-update-7rdjx\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.974221 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d20f2167-7640-4e39-95cc-4007180d1e49-operator-scripts\") pod \"cinder-058b-account-create-update-vt5kl\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.974348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-operator-scripts\") pod \"neutron-f929-account-create-update-7rdjx\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.974383 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fq42b\" (UniqueName: \"kubernetes.io/projected/d20f2167-7640-4e39-95cc-4007180d1e49-kube-api-access-fq42b\") pod \"cinder-058b-account-create-update-vt5kl\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:52 crc kubenswrapper[4780]: E0219 08:43:52.974723 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 08:43:52 crc kubenswrapper[4780]: E0219 08:43:52.974773 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data podName:b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d nodeName:}" failed. No retries permitted until 2026-02-19 08:43:53.474756651 +0000 UTC m=+1376.218414100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data") pod "rabbitmq-server-0" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d") : configmap "rabbitmq-config-data" not found Feb 19 08:43:52 crc kubenswrapper[4780]: I0219 08:43:52.976524 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d20f2167-7640-4e39-95cc-4007180d1e49-operator-scripts\") pod \"cinder-058b-account-create-update-vt5kl\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.009641 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq42b\" (UniqueName: \"kubernetes.io/projected/d20f2167-7640-4e39-95cc-4007180d1e49-kube-api-access-fq42b\") pod \"cinder-058b-account-create-update-vt5kl\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:53 crc 
kubenswrapper[4780]: I0219 08:43:53.010299 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eb1b-account-create-update-bvb8p"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.048046 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.094259 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ef86-account-create-update-6fgrs"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.102318 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzcvv\" (UniqueName: \"kubernetes.io/projected/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-kube-api-access-hzcvv\") pod \"neutron-f929-account-create-update-7rdjx\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.102577 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd8r\" (UniqueName: \"kubernetes.io/projected/8e18e815-82e7-4dba-a607-1e3ba75b98f6-kube-api-access-ntd8r\") pod \"barbican-eb1b-account-create-update-g7kmh\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.102904 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e18e815-82e7-4dba-a607-1e3ba75b98f6-operator-scripts\") pod \"barbican-eb1b-account-create-update-g7kmh\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.103175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-operator-scripts\") pod \"neutron-f929-account-create-update-7rdjx\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.118224 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-eb1b-account-create-update-bvb8p"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.227647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-operator-scripts\") pod \"neutron-f929-account-create-update-7rdjx\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.232086 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e18e815-82e7-4dba-a607-1e3ba75b98f6-operator-scripts\") pod \"barbican-eb1b-account-create-update-g7kmh\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.232316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd8r\" (UniqueName: \"kubernetes.io/projected/8e18e815-82e7-4dba-a607-1e3ba75b98f6-kube-api-access-ntd8r\") pod \"barbican-eb1b-account-create-update-g7kmh\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.233217 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e18e815-82e7-4dba-a607-1e3ba75b98f6-operator-scripts\") pod 
\"barbican-eb1b-account-create-update-g7kmh\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.239715 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzcvv\" (UniqueName: \"kubernetes.io/projected/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-kube-api-access-hzcvv\") pod \"neutron-f929-account-create-update-7rdjx\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.264301 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ef86-account-create-update-6fgrs"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.290236 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7aae-account-create-update-7t6nb"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.291415 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.310554 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.315664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd8r\" (UniqueName: \"kubernetes.io/projected/8e18e815-82e7-4dba-a607-1e3ba75b98f6-kube-api-access-ntd8r\") pod \"barbican-eb1b-account-create-update-g7kmh\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.333497 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b8f6013-5488-4922-94d6-167007269739-operator-scripts\") pod \"placement-7aae-account-create-update-7t6nb\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.333571 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9sxj\" (UniqueName: \"kubernetes.io/projected/1b8f6013-5488-4922-94d6-167007269739-kube-api-access-g9sxj\") pod \"placement-7aae-account-create-update-7t6nb\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.338401 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7aae-account-create-update-7t6nb"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.436223 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b8f6013-5488-4922-94d6-167007269739-operator-scripts\") 
pod \"placement-7aae-account-create-update-7t6nb\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.436512 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9sxj\" (UniqueName: \"kubernetes.io/projected/1b8f6013-5488-4922-94d6-167007269739-kube-api-access-g9sxj\") pod \"placement-7aae-account-create-update-7t6nb\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.437471 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b8f6013-5488-4922-94d6-167007269739-operator-scripts\") pod \"placement-7aae-account-create-update-7t6nb\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.462207 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.462471 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="ovn-northd" containerID="cri-o://658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" gracePeriod=30 Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.462881 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="openstack-network-exporter" containerID="cri-o://bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980" gracePeriod=30 Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.489827 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g9sxj\" (UniqueName: \"kubernetes.io/projected/1b8f6013-5488-4922-94d6-167007269739-kube-api-access-g9sxj\") pod \"placement-7aae-account-create-update-7t6nb\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.506519 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.517271 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rm7g2"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.518844 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.530670 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7aae-account-create-update-q7lfm"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.538177 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rm7g2"] Feb 19 08:43:53 crc kubenswrapper[4780]: E0219 08:43:53.538894 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 08:43:53 crc kubenswrapper[4780]: E0219 08:43:53.538958 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data podName:b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d nodeName:}" failed. No retries permitted until 2026-02-19 08:43:54.538943528 +0000 UTC m=+1377.282600977 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data") pod "rabbitmq-server-0" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d") : configmap "rabbitmq-config-data" not found Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.555918 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7aae-account-create-update-q7lfm"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.574174 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xdwdm"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.583410 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xdwdm"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.592205 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vf6dk"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.620295 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.647896 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="openstack-network-exporter" containerID="cri-o://773278a211535ef5f5087e03e60839045ff960f93bda10d3048f86bb4c48be1b" gracePeriod=300 Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.648331 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vf6dk"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.714106 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.765866 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4s9pd"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.793888 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4s9pd"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.836334 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-623f-account-create-update-5w2cr"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.838192 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.846027 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.846612 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9c29-account-create-update-qnlgt"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.847718 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.849449 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-operator-scripts\") pod \"nova-cell0-623f-account-create-update-5w2cr\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.849521 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szdjs\" (UniqueName: \"kubernetes.io/projected/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-kube-api-access-szdjs\") pod \"nova-cell0-623f-account-create-update-5w2cr\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.852736 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.878048 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-5w2cr"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.889299 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9c29-account-create-update-qnlgt"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.908109 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.929361 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="ovsdbserver-sb" containerID="cri-o://1fcfa97a00e63654def305d1d09092ee5032ff068bd514a1b4ff17b1b6859a16" 
gracePeriod=300 Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.950527 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f6487b-6488-44c0-b2de-2a7f8955a46a-operator-scripts\") pod \"nova-api-9c29-account-create-update-qnlgt\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.950574 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdjs\" (UniqueName: \"kubernetes.io/projected/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-kube-api-access-szdjs\") pod \"nova-cell0-623f-account-create-update-5w2cr\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.950608 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8d4h\" (UniqueName: \"kubernetes.io/projected/06f6487b-6488-44c0-b2de-2a7f8955a46a-kube-api-access-d8d4h\") pod \"nova-api-9c29-account-create-update-qnlgt\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.950722 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-operator-scripts\") pod \"nova-cell0-623f-account-create-update-5w2cr\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.951875 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-operator-scripts\") pod \"nova-cell0-623f-account-create-update-5w2cr\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.987435 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b200855-daef-430e-8967-62c2e51acc86" path="/var/lib/kubelet/pods/0b200855-daef-430e-8967-62c2e51acc86/volumes" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.988198 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5fe54c-d700-4f46-9091-f9f3d4bca327" path="/var/lib/kubelet/pods/4a5fe54c-d700-4f46-9091-f9f3d4bca327/volumes" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.988716 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550d254b-03a9-46ed-bd17-84aa2b8a690f" path="/var/lib/kubelet/pods/550d254b-03a9-46ed-bd17-84aa2b8a690f/volumes" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.996059 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605584b0-916f-400c-a371-801f3eb3daa4" path="/var/lib/kubelet/pods/605584b0-916f-400c-a371-801f3eb3daa4/volumes" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.996889 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8049af4a-37d2-4155-924d-74ddba48cde8" path="/var/lib/kubelet/pods/8049af4a-37d2-4155-924d-74ddba48cde8/volumes" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.998942 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8ae866-88c8-4503-98ca-751c5ae1ef56" path="/var/lib/kubelet/pods/8a8ae866-88c8-4503-98ca-751c5ae1ef56/volumes" Feb 19 08:43:53 crc kubenswrapper[4780]: I0219 08:43:53.999539 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937a02e9-aead-48c0-9c00-28a327719c18" path="/var/lib/kubelet/pods/937a02e9-aead-48c0-9c00-28a327719c18/volumes" Feb 
19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.028054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdjs\" (UniqueName: \"kubernetes.io/projected/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-kube-api-access-szdjs\") pod \"nova-cell0-623f-account-create-update-5w2cr\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.052457 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f6487b-6488-44c0-b2de-2a7f8955a46a-operator-scripts\") pod \"nova-api-9c29-account-create-update-qnlgt\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.052551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8d4h\" (UniqueName: \"kubernetes.io/projected/06f6487b-6488-44c0-b2de-2a7f8955a46a-kube-api-access-d8d4h\") pod \"nova-api-9c29-account-create-update-qnlgt\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.053625 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f0d73a-fdfa-471b-92d4-4433cff2bda8" path="/var/lib/kubelet/pods/b0f0d73a-fdfa-471b-92d4-4433cff2bda8/volumes" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.054558 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f3e284-d8e1-4544-99e7-ac76fe479470" path="/var/lib/kubelet/pods/d0f3e284-d8e1-4544-99e7-ac76fe479470/volumes" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.055177 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c75bcc-eb7e-442e-ae68-288c9c525e73" 
path="/var/lib/kubelet/pods/e1c75bcc-eb7e-442e-ae68-288c9c525e73/volumes" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.055740 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-6l7vh"] Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.062770 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.062851 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data podName:0bc00934-94b1-4be3-8bf4-845ad08a453f nodeName:}" failed. No retries permitted until 2026-02-19 08:43:54.562831393 +0000 UTC m=+1377.306488842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data") pod "rabbitmq-cell1-server-0" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f") : configmap "rabbitmq-cell1-config-data" not found Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.063699 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f6487b-6488-44c0-b2de-2a7f8955a46a-operator-scripts\") pod \"nova-api-9c29-account-create-update-qnlgt\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.065256 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-6l7vh"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.065276 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-zjnch"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.065289 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-zjnch"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.065303 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rbjfc"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.065367 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.073774 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-rbjfc"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.077200 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.136208 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9c29-account-create-update-hcthl"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.164529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8d4h\" (UniqueName: \"kubernetes.io/projected/06f6487b-6488-44c0-b2de-2a7f8955a46a-kube-api-access-d8d4h\") pod \"nova-api-9c29-account-create-update-qnlgt\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.165525 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsm22\" (UniqueName: \"kubernetes.io/projected/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-kube-api-access-qsm22\") pod \"nova-cell1-d1c0-account-create-update-6l7vh\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.165587 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-operator-scripts\") pod \"nova-cell1-d1c0-account-create-update-6l7vh\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.210192 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9c29-account-create-update-hcthl"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.229905 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.244096 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.272196 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsm22\" (UniqueName: \"kubernetes.io/projected/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-kube-api-access-qsm22\") pod \"nova-cell1-d1c0-account-create-update-6l7vh\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.272335 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-operator-scripts\") pod \"nova-cell1-d1c0-account-create-update-6l7vh\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.361396 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-operator-scripts\") pod \"nova-cell1-d1c0-account-create-update-6l7vh\" (UID: 
\"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.369851 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsm22\" (UniqueName: \"kubernetes.io/projected/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-kube-api-access-qsm22\") pod \"nova-cell1-d1c0-account-create-update-6l7vh\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.396500 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7677694455-29vx8"] Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.396851 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7677694455-29vx8" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerName="dnsmasq-dns" containerID="cri-o://a26377ef0467876a0eb785288d7dc23c271d5abc978edde8cc042649188bc179" gracePeriod=10 Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.486304 4780 generic.go:334] "Generic (PLEG): container finished" podID="2c517061-49de-445a-955e-006cbf09b6fd" containerID="bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980" exitCode=2 Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.486367 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c517061-49de-445a-955e-006cbf09b6fd","Type":"ContainerDied","Data":"bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980"} Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.488435 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9/ovsdbserver-sb/0.log" Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.488474 4780 generic.go:334] "Generic (PLEG): container finished" podID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" 
containerID="773278a211535ef5f5087e03e60839045ff960f93bda10d3048f86bb4c48be1b" exitCode=2
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.488489 4780 generic.go:334] "Generic (PLEG): container finished" podID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerID="1fcfa97a00e63654def305d1d09092ee5032ff068bd514a1b4ff17b1b6859a16" exitCode=143
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.488535 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9","Type":"ContainerDied","Data":"773278a211535ef5f5087e03e60839045ff960f93bda10d3048f86bb4c48be1b"}
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.488559 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9","Type":"ContainerDied","Data":"1fcfa97a00e63654def305d1d09092ee5032ff068bd514a1b4ff17b1b6859a16"}
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.490204 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ws48" event={"ID":"c5a38bbd-e720-458c-b364-a10afe06f51e","Type":"ContainerStarted","Data":"dacc6c203c48012ed2cb8c6fc91c126cb7481e4fd050101f7d23e80287ff96e3"}
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.519325 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-f2ks8"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.539290 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-f2ks8"]
Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.580890 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.580949 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data podName:0bc00934-94b1-4be3-8bf4-845ad08a453f nodeName:}" failed. No retries permitted until 2026-02-19 08:43:55.580933754 +0000 UTC m=+1378.324591203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data") pod "rabbitmq-cell1-server-0" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f") : configmap "rabbitmq-cell1-config-data" not found
Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.580984 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.581004 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data podName:b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d nodeName:}" failed. No retries permitted until 2026-02-19 08:43:56.580997486 +0000 UTC m=+1379.324654935 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data") pod "rabbitmq-server-0" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d") : configmap "rabbitmq-config-data" not found
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.590658 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh"
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593034 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593549 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-server" containerID="cri-o://cc54bc275542f23253910477331cac8c186c8fe35ad45eef8b4392d021ab1bd6" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593867 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="swift-recon-cron" containerID="cri-o://2188a10120d8bd63dfe375b654626303aec9847b21eac95df015ea5bd7642279" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593908 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="rsync" containerID="cri-o://356b8ecfe832de6a4352add852082b58adb0d47998d76bb8be15bf9b809ca6ba" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593941 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-expirer" containerID="cri-o://c564e5d19b14dd5427df8bd9f7c31fa51688096980104c53a0592b494393444e" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593969 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-updater" containerID="cri-o://7308fc7b05b12c3aded56d1b465656996edb4a1aaac742755f2267f8856a1738" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.593996 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-auditor" containerID="cri-o://931137663cf98f50b611ab7a350ff29c75096b512f3bd8479a3da859ff9249dd" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594022 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-replicator" containerID="cri-o://e346987639804be13b9078ad7625d170b9ddd9507142084a4d071af736f1e9e5" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594057 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-server" containerID="cri-o://cd76a02c1c0dd51248f9e5d74c516ac9964b548ece3e6feef086a11ee77f79b3" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594089 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-updater" containerID="cri-o://1ebc68436865188cf2fad212eae5f1403c1f801aa9e70f545ee26cbab63c85e3" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594145 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-auditor" containerID="cri-o://a84d96cb9630fe50228a686a5d919a25906bd58fdea9bb24ab6c2a2aa7322132" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594178 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-replicator" containerID="cri-o://877371d495b56baac057d9902da78b62d27253e5f8d0b6e010c755c8c3c39f70" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594215 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-server" containerID="cri-o://b664265cd69a21ec8233ee532fb4e98fecc2e465f25adc32ed09379a81449626" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594246 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-reaper" containerID="cri-o://fca6d50f8c31e787a22a0309a33a9f858675c6427c2b2a393662ddffb55cb5cd" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594279 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-auditor" containerID="cri-o://cfdce07de9a5a0bf8e587fc69e8297c91b8913ec02aceb794adf35e345ef0d13" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.594309 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-replicator" containerID="cri-o://4f00dfcd9db180a87181a3c3d01eba45c08d63f7da35c02e1f8eb76e78aba164" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.624575 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-f7dn2"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.624816 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-f7dn2" podUID="b1c0ef0e-b38d-48e6-b006-8e528c70ff18" containerName="openstack-network-exporter" containerID="cri-o://3410061fc16202d0f292ee59bc88124956f67382228dfe5e24ccf6f91f2ce7cf" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.682266 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-s8k96"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.685611 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f45bb7d89-m7r5b"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.685928 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f45bb7d89-m7r5b" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-api" containerID="cri-o://e6ad5dd9860e6a4ae8010d509505f12ab7f5487560b9bde69360e238499f4fd4" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.689446 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f45bb7d89-m7r5b" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-httpd" containerID="cri-o://0d36fd403a1d035939e04b1b25e02143c36dd932d5f72c7108d1f6415319ef45" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.785866 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nj9cs"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.849595 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.849834 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="cinder-scheduler" containerID="cri-o://7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.850248 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="probe" containerID="cri-o://ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.875190 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 19 08:43:54 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash
Feb 19 08:43:54 crc kubenswrapper[4780]:
Feb 19 08:43:54 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 19 08:43:54 crc kubenswrapper[4780]:
Feb 19 08:43:54 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 19 08:43:54 crc kubenswrapper[4780]:
Feb 19 08:43:54 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 19 08:43:54 crc kubenswrapper[4780]:
Feb 19 08:43:54 crc kubenswrapper[4780]: if [ -n "cinder" ]; then
Feb 19 08:43:54 crc kubenswrapper[4780]: GRANT_DATABASE="cinder"
Feb 19 08:43:54 crc kubenswrapper[4780]: else
Feb 19 08:43:54 crc kubenswrapper[4780]: GRANT_DATABASE="*"
Feb 19 08:43:54 crc kubenswrapper[4780]: fi
Feb 19 08:43:54 crc kubenswrapper[4780]:
Feb 19 08:43:54 crc kubenswrapper[4780]: # going for maximum compatibility here:
Feb 19 08:43:54 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 19 08:43:54 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 19 08:43:54 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 19 08:43:54 crc kubenswrapper[4780]: # support updates
Feb 19 08:43:54 crc kubenswrapper[4780]:
Feb 19 08:43:54 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError"
Feb 19 08:43:54 crc kubenswrapper[4780]: E0219 08:43:54.876684 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-058b-account-create-update-vt5kl" podUID="d20f2167-7640-4e39-95cc-4007180d1e49"
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.927511 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.927835 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-log" containerID="cri-o://4162407cdf5682d804be4b4717c823043ddf3ab7e9943293c605618e3930edf7" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.928402 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-httpd" containerID="cri-o://bcacaffefa0805038ec68a239723691428dbbee367f236e1e7e7b362dd644e5e" gracePeriod=30
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.941390 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-jzdls"]
Feb 19 08:43:54 crc kubenswrapper[4780]: I0219 08:43:54.966560 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-jzdls"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.003907 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.004245 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api-log" containerID="cri-o://9b449827662b36d6c9c93dd1e51b9613703943aec7408ea0336acf021bd59ad8" gracePeriod=30
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.004814 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api" containerID="cri-o://7b196fe6ed67411ad242dd795c03caa7fd2feb9dbef00ff8b65a8ef8e03b4da0" gracePeriod=30
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.025187 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.025503 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="openstack-network-exporter" containerID="cri-o://e1c828e53372b01ed0b60ce38962b074fb08a9c3280ed7aac4fbb5ce93ddbb17" gracePeriod=300
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.089559 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6ws48"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.149106 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="ovsdbserver-nb" containerID="cri-o://fa846a4869b3acb10eb8a21741ba18e7569c9f2c7d5f843a9c97988ef168cc97" gracePeriod=300
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.179775 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.180163 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-log" containerID="cri-o://511ae1a6b95e07069083114a3d15f66169e2683396feb32e7d98594881f3165c" gracePeriod=30
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.180293 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-httpd" containerID="cri-o://470d613c3f2933cabeb420246069bef8c1516a00e6cebf54fd8f45fec126403e" gracePeriod=30
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.224236 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9/ovsdbserver-sb/0.log"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.224296 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.247638 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78d56d997b-gx5gk"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.247868 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78d56d997b-gx5gk" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-log" containerID="cri-o://4a2deac91a19b42884cc5219eea805083e84f812e11c4c1c5d92ca54a8646ebf" gracePeriod=30
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.248325 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78d56d997b-gx5gk" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-api" containerID="cri-o://c021cdf9bb1cdb9fed0336ae04d70630c86309a60b2593d61c63d06c8b046dcd" gracePeriod=30
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.283180 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-x87w6"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.291202 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-x87w6"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.311490 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4q56p"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.334785 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4q56p"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-scripts\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdb-rundir\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335436 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-metrics-certs-tls-certs\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335539 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdbserver-sb-tls-certs\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335573 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-config\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335619 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zsx5\" (UniqueName: \"kubernetes.io/projected/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-kube-api-access-4zsx5\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-combined-ca-bundle\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.335693 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\" (UID: \"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9\") "
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.342510 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-config" (OuterVolumeSpecName: "config") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.343175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.347254 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-scripts" (OuterVolumeSpecName: "scripts") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.362183 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4hz99"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.368972 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4hz99"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.375367 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-kube-api-access-4zsx5" (OuterVolumeSpecName: "kube-api-access-4zsx5") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "kube-api-access-4zsx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.378514 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.388587 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-th9tx"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.398992 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-th9tx"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.419080 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eb1b-account-create-update-g7kmh"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.439264 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-058b-account-create-update-vt5kl"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.440779 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.440811 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.440824 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.440837 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zsx5\" (UniqueName: \"kubernetes.io/projected/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-kube-api-access-4zsx5\") on node \"crc\" DevicePath \"\""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.440861 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.440891 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 19 08:43:55 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: if [ -n "placement" ]; then
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="placement"
Feb 19 08:43:55 crc kubenswrapper[4780]: else
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="*"
Feb 19 08:43:55 crc kubenswrapper[4780]: fi
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: # going for maximum compatibility here:
Feb 19 08:43:55 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 19 08:43:55 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 19 08:43:55 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 19 08:43:55 crc kubenswrapper[4780]: # support updates
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError"
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.442281 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-7aae-account-create-update-7t6nb" podUID="1b8f6013-5488-4922-94d6-167007269739"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.443593 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.462195 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v7869"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.479073 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.490398 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v7869"]
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.507736 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 19 08:43:55 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: if [ -n "barbican" ]; then
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="barbican"
Feb 19 08:43:55 crc kubenswrapper[4780]: else
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="*"
Feb 19 08:43:55 crc kubenswrapper[4780]: fi
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: # going for maximum compatibility here:
Feb 19 08:43:55 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 19 08:43:55 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 19 08:43:55 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 19 08:43:55 crc kubenswrapper[4780]: # support updates
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError"
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.509853 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-eb1b-account-create-update-g7kmh" podUID="8e18e815-82e7-4dba-a607-1e3ba75b98f6"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.523111 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9t6f2"]
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.529940 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 19 08:43:55 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: if [ -n "neutron" ]; then
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="neutron"
Feb 19 08:43:55 crc kubenswrapper[4780]: else
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="*"
Feb 19 08:43:55 crc kubenswrapper[4780]: fi
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: # going for maximum compatibility here:
Feb 19 08:43:55 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 19 08:43:55 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 19 08:43:55 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 19 08:43:55 crc kubenswrapper[4780]: # support updates
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError"
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.531169 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-f929-account-create-update-7rdjx" podUID="c180e0b2-79c3-49b7-bac3-f868aeebd2cc"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.538178 4780 generic.go:334] "Generic (PLEG): container finished" podID="80168270-a6db-4ef2-833b-5d2eb2781779" containerID="4a2deac91a19b42884cc5219eea805083e84f812e11c4c1c5d92ca54a8646ebf" exitCode=143
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.538257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d56d997b-gx5gk" event={"ID":"80168270-a6db-4ef2-833b-5d2eb2781779","Type":"ContainerDied","Data":"4a2deac91a19b42884cc5219eea805083e84f812e11c4c1c5d92ca54a8646ebf"}
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.544757 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.544792 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.553085 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7aae-account-create-update-7t6nb" event={"ID":"1b8f6013-5488-4922-94d6-167007269739","Type":"ContainerStarted","Data":"51fb3873321e3b7595a0316cdf20efd4dddaa4f6100f7d2c62bd9f4a3a5757bb"}
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.558203 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.568916 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9t6f2"]
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.574041 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 19 08:43:55 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: if [ -n "placement" ]; then
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="placement"
Feb 19 08:43:55 crc kubenswrapper[4780]: else
Feb 19 08:43:55 crc kubenswrapper[4780]: GRANT_DATABASE="*"
Feb 19 08:43:55 crc kubenswrapper[4780]: fi
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: # going for maximum compatibility here:
Feb 19 08:43:55 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 19 08:43:55 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 19 08:43:55 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 19 08:43:55 crc kubenswrapper[4780]: # support updates
Feb 19 08:43:55 crc kubenswrapper[4780]:
Feb 19 08:43:55 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError"
Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.576167 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-7aae-account-create-update-7t6nb" podUID="1b8f6013-5488-4922-94d6-167007269739"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.578650 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c875b359-e76d-4fd0-99fb-10c8b04dfb35/ovsdbserver-nb/0.log"
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.578706 4780 generic.go:334] "Generic (PLEG): container finished" podID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerID="e1c828e53372b01ed0b60ce38962b074fb08a9c3280ed7aac4fbb5ce93ddbb17" exitCode=2
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.578725 4780 generic.go:334] "Generic (PLEG): container finished" podID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerID="fa846a4869b3acb10eb8a21741ba18e7569c9f2c7d5f843a9c97988ef168cc97" exitCode=143
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.578798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c875b359-e76d-4fd0-99fb-10c8b04dfb35","Type":"ContainerDied","Data":"e1c828e53372b01ed0b60ce38962b074fb08a9c3280ed7aac4fbb5ce93ddbb17"}
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.578845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c875b359-e76d-4fd0-99fb-10c8b04dfb35","Type":"ContainerDied","Data":"fa846a4869b3acb10eb8a21741ba18e7569c9f2c7d5f843a9c97988ef168cc97"}
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.579798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eb1b-account-create-update-g7kmh" event={"ID":"8e18e815-82e7-4dba-a607-1e3ba75b98f6","Type":"ContainerStarted","Data":"4d4d6fd362c3f87a15939e65179bf75de1b0f458c087299b0d2d1abd69aee996"}
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.593312 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f929-account-create-update-7rdjx"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.609220 4780 generic.go:334] "Generic (PLEG): container finished" podID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerID="a26377ef0467876a0eb785288d7dc23c271d5abc978edde8cc042649188bc179" exitCode=0
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.609549 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-29vx8" event={"ID":"c0d666c4-abfe-4b46-90db-1fd272d8adb4","Type":"ContainerDied","Data":"a26377ef0467876a0eb785288d7dc23c271d5abc978edde8cc042649188bc179"}
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.611282 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84f494b65f-swr5f"]
Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.611558 4780
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84f494b65f-swr5f" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker-log" containerID="cri-o://33372ea022a8bd6de99a3d6f15e51d7ba430019ef7b27207983d49036151c801" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.611712 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84f494b65f-swr5f" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker" containerID="cri-o://6cc78ab8f7b9e9df271b1241208a5165a0e1b133172de580b0941a07a1cbbb55" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.642433 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57d747cdfb-5j92k"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.642682 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener-log" containerID="cri-o://cf73772a4d01bf87fe0b1f3121d3412df9da363dc17c7d7d04a7882814ffc9ad" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.643037 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener" containerID="cri-o://7edbc265ca1fca9fec89c4aa613f291e13e679212a1411e4d048f4165e32dd71" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.659118 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.659858 4780 configmap.go:193] 
Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.659921 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data podName:0bc00934-94b1-4be3-8bf4-845ad08a453f nodeName:}" failed. No retries permitted until 2026-02-19 08:43:57.65990241 +0000 UTC m=+1380.403559859 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data") pod "rabbitmq-cell1-server-0" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f") : configmap "rabbitmq-cell1-config-data" not found Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.665216 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.666815 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" containerID="cri-o://b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.667091 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" containerID="cri-o://253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.685902 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gbfmd"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.698788 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f57f4f6f6-8lqlt"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 
08:43:55.699294 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api" containerID="cri-o://516c8de3c33c0337fd76fb32a1510070e91e8a75deedbf4866e716ba08c4c8aa" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.699059 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api-log" containerID="cri-o://53eecf6f3abbe44f7e06ac0af7e4deebaf1979eb160ea1159e05a543ac4aea01" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.703511 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.722243 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" (UID: "4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.724290 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gbfmd"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.732406 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7aae-account-create-update-7t6nb"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.738157 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerName="rabbitmq" containerID="cri-o://7c909b0dbce18b4a1334fd4ddf863413080b8c52e4f0a329f074299164d924ec" gracePeriod=604800 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.758018 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9c29-account-create-update-qnlgt"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.767968 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.774408 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.788659 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.788913 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-log" containerID="cri-o://f4e52f5c3cc7e79bbf52ddff38d3ac4f6046c9da7c1d0d4fb161ff87adbe9315" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.789382 4780 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-metadata" containerID="cri-o://ba25906e3f30c93cfd93251995fbe6b9adb85d14c0ee7551594e1bd77644bf06" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805032 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="356b8ecfe832de6a4352add852082b58adb0d47998d76bb8be15bf9b809ca6ba" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805276 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="c564e5d19b14dd5427df8bd9f7c31fa51688096980104c53a0592b494393444e" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805284 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="7308fc7b05b12c3aded56d1b465656996edb4a1aaac742755f2267f8856a1738" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805290 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="931137663cf98f50b611ab7a350ff29c75096b512f3bd8479a3da859ff9249dd" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805296 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="e346987639804be13b9078ad7625d170b9ddd9507142084a4d071af736f1e9e5" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805302 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="cd76a02c1c0dd51248f9e5d74c516ac9964b548ece3e6feef086a11ee77f79b3" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805309 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" 
containerID="1ebc68436865188cf2fad212eae5f1403c1f801aa9e70f545ee26cbab63c85e3" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805316 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="a84d96cb9630fe50228a686a5d919a25906bd58fdea9bb24ab6c2a2aa7322132" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805324 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="877371d495b56baac057d9902da78b62d27253e5f8d0b6e010c755c8c3c39f70" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805331 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="b664265cd69a21ec8233ee532fb4e98fecc2e465f25adc32ed09379a81449626" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805337 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="fca6d50f8c31e787a22a0309a33a9f858675c6427c2b2a393662ddffb55cb5cd" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805344 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="cfdce07de9a5a0bf8e587fc69e8297c91b8913ec02aceb794adf35e345ef0d13" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805351 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="4f00dfcd9db180a87181a3c3d01eba45c08d63f7da35c02e1f8eb76e78aba164" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805357 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="cc54bc275542f23253910477331cac8c186c8fe35ad45eef8b4392d021ab1bd6" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805420 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-058b-account-create-update-vt5kl"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805445 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"356b8ecfe832de6a4352add852082b58adb0d47998d76bb8be15bf9b809ca6ba"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805462 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"c564e5d19b14dd5427df8bd9f7c31fa51688096980104c53a0592b494393444e"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805471 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"7308fc7b05b12c3aded56d1b465656996edb4a1aaac742755f2267f8856a1738"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805480 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"931137663cf98f50b611ab7a350ff29c75096b512f3bd8479a3da859ff9249dd"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"e346987639804be13b9078ad7625d170b9ddd9507142084a4d071af736f1e9e5"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805499 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"cd76a02c1c0dd51248f9e5d74c516ac9964b548ece3e6feef086a11ee77f79b3"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805510 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"1ebc68436865188cf2fad212eae5f1403c1f801aa9e70f545ee26cbab63c85e3"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805520 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"a84d96cb9630fe50228a686a5d919a25906bd58fdea9bb24ab6c2a2aa7322132"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805529 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"877371d495b56baac057d9902da78b62d27253e5f8d0b6e010c755c8c3c39f70"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805537 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"b664265cd69a21ec8233ee532fb4e98fecc2e465f25adc32ed09379a81449626"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"fca6d50f8c31e787a22a0309a33a9f858675c6427c2b2a393662ddffb55cb5cd"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805554 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"cfdce07de9a5a0bf8e587fc69e8297c91b8913ec02aceb794adf35e345ef0d13"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.805562 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"4f00dfcd9db180a87181a3c3d01eba45c08d63f7da35c02e1f8eb76e78aba164"} Feb 19 08:43:55 crc 
kubenswrapper[4780]: I0219 08:43:55.805571 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"cc54bc275542f23253910477331cac8c186c8fe35ad45eef8b4392d021ab1bd6"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.814347 4780 generic.go:334] "Generic (PLEG): container finished" podID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerID="0d36fd403a1d035939e04b1b25e02143c36dd932d5f72c7108d1f6415319ef45" exitCode=0 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.815207 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-5w2cr"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.815255 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f45bb7d89-m7r5b" event={"ID":"8a16f10c-8261-47f0-949b-abe6aaf7a408","Type":"ContainerDied","Data":"0d36fd403a1d035939e04b1b25e02143c36dd932d5f72c7108d1f6415319ef45"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.819005 4780 generic.go:334] "Generic (PLEG): container finished" podID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerID="511ae1a6b95e07069083114a3d15f66169e2683396feb32e7d98594881f3165c" exitCode=143 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.819065 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a69047c-4c8d-4b93-82b3-005a9e83f686","Type":"ContainerDied","Data":"511ae1a6b95e07069083114a3d15f66169e2683396feb32e7d98594881f3165c"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.821658 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4cjwm"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.830285 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4cjwm"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.837207 4780 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-w4nft"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.843932 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-w4nft"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.849608 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.849615 4780 generic.go:334] "Generic (PLEG): container finished" podID="bc29e551-efab-43d8-94d5-1c515a76dca9" containerID="99328aca990c1786f3a96df839e96c76191fce3e843e27d37b4c746716c1d54b" exitCode=137 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.849714 4780 scope.go:117] "RemoveContainer" containerID="99328aca990c1786f3a96df839e96c76191fce3e843e27d37b4c746716c1d54b" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.851210 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.851429 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-log" containerID="cri-o://d2450c95cfe8bb926c8e2c2b6644fad3afd770991b68151d38b6cb8f5a8ae9d1" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.851763 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-api" containerID="cri-o://27850fd9ac7fe009c176f0a9206a0fd99a8d233881001985a4dcd3b476a6ee51" gracePeriod=30 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.862413 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6ws48"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.864715 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerID="9ce173c0a606bf17ab90d5b9a0e2f16da44fdff99e89dc2f4054023128c6d461" exitCode=1 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.865033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ws48" event={"ID":"c5a38bbd-e720-458c-b364-a10afe06f51e","Type":"ContainerDied","Data":"9ce173c0a606bf17ab90d5b9a0e2f16da44fdff99e89dc2f4054023128c6d461"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.865164 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-6ws48" secret="" err="secret \"galera-openstack-cell1-dockercfg-pbdsv\" not found" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.865199 4780 scope.go:117] "RemoveContainer" containerID="9ce173c0a606bf17ab90d5b9a0e2f16da44fdff99e89dc2f4054023128c6d461" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870102 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config\") pod \"bc29e551-efab-43d8-94d5-1c515a76dca9\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870201 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config-secret\") pod \"bc29e551-efab-43d8-94d5-1c515a76dca9\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870260 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v4zj\" (UniqueName: \"kubernetes.io/projected/bc29e551-efab-43d8-94d5-1c515a76dca9-kube-api-access-6v4zj\") pod \"bc29e551-efab-43d8-94d5-1c515a76dca9\" (UID: 
\"bc29e551-efab-43d8-94d5-1c515a76dca9\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-config\") pod \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870400 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqf5\" (UniqueName: \"kubernetes.io/projected/c0d666c4-abfe-4b46-90db-1fd272d8adb4-kube-api-access-lkqf5\") pod \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870422 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-nb\") pod \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870443 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-svc\") pod \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870478 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-sb\") pod \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870551 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-swift-storage-0\") pod \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\" (UID: \"c0d666c4-abfe-4b46-90db-1fd272d8adb4\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.870610 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-combined-ca-bundle\") pod \"bc29e551-efab-43d8-94d5-1c515a76dca9\" (UID: \"bc29e551-efab-43d8-94d5-1c515a76dca9\") " Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.881702 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-6l7vh"] Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.882063 4780 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 08:43:55 crc kubenswrapper[4780]: E0219 08:43:55.882105 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts podName:c5a38bbd-e720-458c-b364-a10afe06f51e nodeName:}" failed. No retries permitted until 2026-02-19 08:43:56.382093601 +0000 UTC m=+1379.125751050 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts") pod "root-account-create-update-6ws48" (UID: "c5a38bbd-e720-458c-b364-a10afe06f51e") : configmap "openstack-cell1-scripts" not found Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.906384 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kd78h"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.907190 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d666c4-abfe-4b46-90db-1fd272d8adb4-kube-api-access-lkqf5" (OuterVolumeSpecName: "kube-api-access-lkqf5") pod "c0d666c4-abfe-4b46-90db-1fd272d8adb4" (UID: "c0d666c4-abfe-4b46-90db-1fd272d8adb4"). InnerVolumeSpecName "kube-api-access-lkqf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.907239 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9/ovsdbserver-sb/0.log" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.907321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9","Type":"ContainerDied","Data":"8d9e72110121a571cbeddde60065cff40785696f809e8ebc480734d27f4198a2"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.907455 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.925513 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kd78h"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.927037 4780 generic.go:334] "Generic (PLEG): container finished" podID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerID="4162407cdf5682d804be4b4717c823043ddf3ab7e9943293c605618e3930edf7" exitCode=143 Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.927107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa951d8d-6e05-4995-9a80-fb0808216e61","Type":"ContainerDied","Data":"4162407cdf5682d804be4b4717c823043ddf3ab7e9943293c605618e3930edf7"} Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.937019 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.952864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc29e551-efab-43d8-94d5-1c515a76dca9-kube-api-access-6v4zj" (OuterVolumeSpecName: "kube-api-access-6v4zj") pod "bc29e551-efab-43d8-94d5-1c515a76dca9" (UID: "bc29e551-efab-43d8-94d5-1c515a76dca9"). InnerVolumeSpecName "kube-api-access-6v4zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.963237 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4139c0c2-1d42-4e2d-89ac-240b1719eb16" path="/var/lib/kubelet/pods/4139c0c2-1d42-4e2d-89ac-240b1719eb16/volumes" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.967012 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4825dc1d-ef7f-4ab2-a873-1072afe8e515" path="/var/lib/kubelet/pods/4825dc1d-ef7f-4ab2-a873-1072afe8e515/volumes" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.968778 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="490019fb-c322-4355-b6c6-5eb9eaba34ca" path="/var/lib/kubelet/pods/490019fb-c322-4355-b6c6-5eb9eaba34ca/volumes" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.970654 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57bc781e-12f1-4fd2-9bcb-fa3021a33e60" path="/var/lib/kubelet/pods/57bc781e-12f1-4fd2-9bcb-fa3021a33e60/volumes" Feb 19 08:43:55 crc kubenswrapper[4780]: I0219 08:43:55.978997 4780 generic.go:334] "Generic (PLEG): container finished" podID="041edb21-581b-493e-a2f1-09e0b3559df1" containerID="9b449827662b36d6c9c93dd1e51b9613703943aec7408ea0336acf021bd59ad8" exitCode=143 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.011682 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bc29e551-efab-43d8-94d5-1c515a76dca9" (UID: "bc29e551-efab-43d8-94d5-1c515a76dca9"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.032970 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.032997 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v4zj\" (UniqueName: \"kubernetes.io/projected/bc29e551-efab-43d8-94d5-1c515a76dca9-kube-api-access-6v4zj\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.033006 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqf5\" (UniqueName: \"kubernetes.io/projected/c0d666c4-abfe-4b46-90db-1fd272d8adb4-kube-api-access-lkqf5\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.033360 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636fc704-8df3-4d54-98e0-6976bbf071b2" path="/var/lib/kubelet/pods/636fc704-8df3-4d54-98e0-6976bbf071b2/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.034151 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b6be5d-b212-4ef8-8bed-3e9e4337a0bf" path="/var/lib/kubelet/pods/67b6be5d-b212-4ef8-8bed-3e9e4337a0bf/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.040616 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f7dn2_b1c0ef0e-b38d-48e6-b006-8e528c70ff18/openstack-network-exporter/0.log" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.040655 4780 generic.go:334] "Generic (PLEG): container finished" podID="b1c0ef0e-b38d-48e6-b006-8e528c70ff18" containerID="3410061fc16202d0f292ee59bc88124956f67382228dfe5e24ccf6f91f2ce7cf" exitCode=2 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.043923 4780 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="690f441d-627e-4fc9-aee6-069a9d11946f" path="/var/lib/kubelet/pods/690f441d-627e-4fc9-aee6-069a9d11946f/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.044714 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873052c3-b896-4852-a0be-8c7f4b1edbf0" path="/var/lib/kubelet/pods/873052c3-b896-4852-a0be-8c7f4b1edbf0/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.045524 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cef886d-8b12-490f-9860-de378fc3d6fb" path="/var/lib/kubelet/pods/9cef886d-8b12-490f-9860-de378fc3d6fb/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.046591 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aace9000-22e3-4d6f-98b5-c7ce0c39f31c" path="/var/lib/kubelet/pods/aace9000-22e3-4d6f-98b5-c7ce0c39f31c/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.047166 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5dbddd7-97e0-495e-8cc0-326e18ea5843" path="/var/lib/kubelet/pods/b5dbddd7-97e0-495e-8cc0-326e18ea5843/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.067303 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33d5818-5750-4ac6-9016-e886177a9b4e" path="/var/lib/kubelet/pods/c33d5818-5750-4ac6-9016-e886177a9b4e/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.068485 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc03068-08f3-4ded-9521-145d162f2053" path="/var/lib/kubelet/pods/cbc03068-08f3-4ded-9521-145d162f2053/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.069309 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6bf7c0-1e73-447f-be82-7c45be42304b" path="/var/lib/kubelet/pods/fe6bf7c0-1e73-447f-be82-7c45be42304b/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.070256 4780 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerName="rabbitmq" containerID="cri-o://f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156" gracePeriod=604800 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.070424 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8532ec-1e86-48f6-8446-3ff490756edd" path="/var/lib/kubelet/pods/fe8532ec-1e86-48f6-8446-3ff490756edd/volumes" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.079609 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc29e551-efab-43d8-94d5-1c515a76dca9" (UID: "bc29e551-efab-43d8-94d5-1c515a76dca9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.108237 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0d666c4-abfe-4b46-90db-1fd272d8adb4" (UID: "c0d666c4-abfe-4b46-90db-1fd272d8adb4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.120024 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-config" (OuterVolumeSpecName: "config") pod "c0d666c4-abfe-4b46-90db-1fd272d8adb4" (UID: "c0d666c4-abfe-4b46-90db-1fd272d8adb4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.132168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0d666c4-abfe-4b46-90db-1fd272d8adb4" (UID: "c0d666c4-abfe-4b46-90db-1fd272d8adb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.135978 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.136002 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.136011 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.136020 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.136199 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerName="galera" containerID="cri-o://5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" gracePeriod=30 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.138621 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bc29e551-efab-43d8-94d5-1c515a76dca9" (UID: "bc29e551-efab-43d8-94d5-1c515a76dca9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165586 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165629 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7aae-account-create-update-7t6nb"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165643 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eb1b-account-create-update-g7kmh"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165685 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"041edb21-581b-493e-a2f1-09e0b3559df1","Type":"ContainerDied","Data":"9b449827662b36d6c9c93dd1e51b9613703943aec7408ea0336acf021bd59ad8"} Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165707 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-058b-account-create-update-vt5kl" event={"ID":"d20f2167-7640-4e39-95cc-4007180d1e49","Type":"ContainerStarted","Data":"67ebb44ed3c129d979e6f96334d651d6f2f1015e98a0e7a6c5ed0fbb10d27232"} Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165718 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f7dn2" event={"ID":"b1c0ef0e-b38d-48e6-b006-8e528c70ff18","Type":"ContainerDied","Data":"3410061fc16202d0f292ee59bc88124956f67382228dfe5e24ccf6f91f2ce7cf"} Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165756 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-f929-account-create-update-7rdjx"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165774 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165792 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bv9tc"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.165803 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bv9tc"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.166005 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.166019 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r6stw"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.166027 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.166039 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r6stw"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.166764 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="51a5891a-27e3-404a-b8c8-51c2399e8903" containerName="nova-cell0-conductor-conductor" containerID="cri-o://db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869" gracePeriod=30 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.167504 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f0213271-e4da-4b8a-a732-b90d74d540ca" containerName="nova-scheduler-scheduler" containerID="cri-o://641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" gracePeriod=30 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 
08:43:56.167877 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerName="nova-cell1-conductor-conductor" containerID="cri-o://a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" gracePeriod=30 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.168184 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a7fa9686-243a-4fbe-ba17-93f9e4aa822c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8054af60ddd0b374a2c0f62a832f4a309e45631fbcb23191918b751a178136ea" gracePeriod=30 Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.199583 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f7dn2_b1c0ef0e-b38d-48e6-b006-8e528c70ff18/openstack-network-exporter/0.log" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.199909 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.204936 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0d666c4-abfe-4b46-90db-1fd272d8adb4" (UID: "c0d666c4-abfe-4b46-90db-1fd272d8adb4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.207043 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0d666c4-abfe-4b46-90db-1fd272d8adb4" (UID: "c0d666c4-abfe-4b46-90db-1fd272d8adb4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.245350 4780 scope.go:117] "RemoveContainer" containerID="773278a211535ef5f5087e03e60839045ff960f93bda10d3048f86bb4c48be1b" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.248440 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc29e551-efab-43d8-94d5-1c515a76dca9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.248463 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.248473 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d666c4-abfe-4b46-90db-1fd272d8adb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.312080 4780 scope.go:117] "RemoveContainer" containerID="1fcfa97a00e63654def305d1d09092ee5032ff068bd514a1b4ff17b1b6859a16" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.354953 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-metrics-certs-tls-certs\") pod \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.355019 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovs-rundir\") pod \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.355040 
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l64fg\" (UniqueName: \"kubernetes.io/projected/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-kube-api-access-l64fg\") pod \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.355065 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-config\") pod \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.356662 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-combined-ca-bundle\") pod \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.356705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovn-rundir\") pod \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\" (UID: \"b1c0ef0e-b38d-48e6-b006-8e528c70ff18\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.357481 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "b1c0ef0e-b38d-48e6-b006-8e528c70ff18" (UID: "b1c0ef0e-b38d-48e6-b006-8e528c70ff18"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.360238 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-config" (OuterVolumeSpecName: "config") pod "b1c0ef0e-b38d-48e6-b006-8e528c70ff18" (UID: "b1c0ef0e-b38d-48e6-b006-8e528c70ff18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.360310 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "b1c0ef0e-b38d-48e6-b006-8e528c70ff18" (UID: "b1c0ef0e-b38d-48e6-b006-8e528c70ff18"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.370221 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c875b359-e76d-4fd0-99fb-10c8b04dfb35/ovsdbserver-nb/0.log" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.370300 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.360345 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.370715 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.381556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-kube-api-access-l64fg" (OuterVolumeSpecName: "kube-api-access-l64fg") pod "b1c0ef0e-b38d-48e6-b006-8e528c70ff18" (UID: "b1c0ef0e-b38d-48e6-b006-8e528c70ff18"). 
InnerVolumeSpecName "kube-api-access-l64fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.432660 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1c0ef0e-b38d-48e6-b006-8e528c70ff18" (UID: "b1c0ef0e-b38d-48e6-b006-8e528c70ff18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.458962 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-scripts\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.459065 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztkl\" (UniqueName: \"kubernetes.io/projected/c875b359-e76d-4fd0-99fb-10c8b04dfb35-kube-api-access-tztkl\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.459676 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdbserver-nb-tls-certs\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.459788 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-config\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc 
kubenswrapper[4780]: I0219 08:43:56.459850 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.459877 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-metrics-certs-tls-certs\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.459940 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdb-rundir\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.459996 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-combined-ca-bundle\") pod \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\" (UID: \"c875b359-e76d-4fd0-99fb-10c8b04dfb35\") " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.460099 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-scripts" (OuterVolumeSpecName: "scripts") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.460848 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-config" (OuterVolumeSpecName: "config") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.462798 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: E0219 08:43:56.466047 4780 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 08:43:56 crc kubenswrapper[4780]: E0219 08:43:56.466132 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts podName:c5a38bbd-e720-458c-b364-a10afe06f51e nodeName:}" failed. No retries permitted until 2026-02-19 08:43:57.46610436 +0000 UTC m=+1380.209761809 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts") pod "root-account-create-update-6ws48" (UID: "c5a38bbd-e720-458c-b364-a10afe06f51e") : configmap "openstack-cell1-scripts" not found Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.466306 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472447 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472488 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472499 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472511 4780 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472538 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l64fg\" (UniqueName: 
\"kubernetes.io/projected/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-kube-api-access-l64fg\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472549 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472559 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c875b359-e76d-4fd0-99fb-10c8b04dfb35-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472589 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.472611 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.512708 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9c29-account-create-update-qnlgt"] Feb 19 08:43:56 crc kubenswrapper[4780]: E0219 08:43:56.542939 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:43:56 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 08:43:56 crc kubenswrapper[4780]: Feb 19 08:43:56 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 08:43:56 crc kubenswrapper[4780]: Feb 19 08:43:56 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a 
DatabasePassword variable."} Feb 19 08:43:56 crc kubenswrapper[4780]: Feb 19 08:43:56 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 08:43:56 crc kubenswrapper[4780]: Feb 19 08:43:56 crc kubenswrapper[4780]: if [ -n "nova_api" ]; then Feb 19 08:43:56 crc kubenswrapper[4780]: GRANT_DATABASE="nova_api" Feb 19 08:43:56 crc kubenswrapper[4780]: else Feb 19 08:43:56 crc kubenswrapper[4780]: GRANT_DATABASE="*" Feb 19 08:43:56 crc kubenswrapper[4780]: fi Feb 19 08:43:56 crc kubenswrapper[4780]: Feb 19 08:43:56 crc kubenswrapper[4780]: # going for maximum compatibility here: Feb 19 08:43:56 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 08:43:56 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 08:43:56 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 08:43:56 crc kubenswrapper[4780]: # support updates Feb 19 08:43:56 crc kubenswrapper[4780]: Feb 19 08:43:56 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.543547 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:56 crc kubenswrapper[4780]: E0219 08:43:56.544196 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-9c29-account-create-update-qnlgt" podUID="06f6487b-6488-44c0-b2de-2a7f8955a46a" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.547758 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.568387 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c875b359-e76d-4fd0-99fb-10c8b04dfb35-kube-api-access-tztkl" (OuterVolumeSpecName: "kube-api-access-tztkl") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "kube-api-access-tztkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.577643 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.583304 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztkl\" (UniqueName: \"kubernetes.io/projected/c875b359-e76d-4fd0-99fb-10c8b04dfb35-kube-api-access-tztkl\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:56 crc kubenswrapper[4780]: I0219 08:43:56.595254 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.687256 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntd8r\" (UniqueName: \"kubernetes.io/projected/8e18e815-82e7-4dba-a607-1e3ba75b98f6-kube-api-access-ntd8r\") pod \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.687363 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e18e815-82e7-4dba-a607-1e3ba75b98f6-operator-scripts\") pod \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\" (UID: \"8e18e815-82e7-4dba-a607-1e3ba75b98f6\") " Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.688149 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:56.688209 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 
19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:56.688255 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data podName:b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d nodeName:}" failed. No retries permitted until 2026-02-19 08:44:00.688239759 +0000 UTC m=+1383.431897208 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data") pod "rabbitmq-server-0" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d") : configmap "rabbitmq-config-data" not found Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.691706 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e18e815-82e7-4dba-a607-1e3ba75b98f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e18e815-82e7-4dba-a607-1e3ba75b98f6" (UID: "8e18e815-82e7-4dba-a607-1e3ba75b98f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.697115 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e18e815-82e7-4dba-a607-1e3ba75b98f6-kube-api-access-ntd8r" (OuterVolumeSpecName: "kube-api-access-ntd8r") pod "8e18e815-82e7-4dba-a607-1e3ba75b98f6" (UID: "8e18e815-82e7-4dba-a607-1e3ba75b98f6"). InnerVolumeSpecName "kube-api-access-ntd8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.707540 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b1c0ef0e-b38d-48e6-b006-8e528c70ff18" (UID: "b1c0ef0e-b38d-48e6-b006-8e528c70ff18"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.710290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.760090 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.769747 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-5w2cr"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.789441 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c0ef0e-b38d-48e6-b006-8e528c70ff18-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.789468 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.789478 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntd8r\" (UniqueName: \"kubernetes.io/projected/8e18e815-82e7-4dba-a607-1e3ba75b98f6-kube-api-access-ntd8r\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.789488 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e18e815-82e7-4dba-a607-1e3ba75b98f6-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.820518 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c875b359-e76d-4fd0-99fb-10c8b04dfb35" (UID: "c875b359-e76d-4fd0-99fb-10c8b04dfb35"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: W0219 08:43:56.832393 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c4ea3e_9190_44e9_8cd1_fa2ecce7e5d4.slice/crio-2a7093f84e64b2eee81362aa743e384023201c46ccc391aa87360e624f188ae8 WatchSource:0}: Error finding container 2a7093f84e64b2eee81362aa743e384023201c46ccc391aa87360e624f188ae8: Status 404 returned error can't find the container with id 2a7093f84e64b2eee81362aa743e384023201c46ccc391aa87360e624f188ae8 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.838717 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-6l7vh"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:56.866058 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:43:57 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: 
MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: if [ -n "nova_cell0" ]; then Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="nova_cell0" Feb 19 08:43:57 crc kubenswrapper[4780]: else Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="*" Feb 19 08:43:57 crc kubenswrapper[4780]: fi Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: # going for maximum compatibility here: Feb 19 08:43:57 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 08:43:57 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 08:43:57 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 08:43:57 crc kubenswrapper[4780]: # support updates Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:56.869966 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" podUID="93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:56.886058 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:43:57 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: export 
DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: if [ -n "nova_cell1" ]; then Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="nova_cell1" Feb 19 08:43:57 crc kubenswrapper[4780]: else Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="*" Feb 19 08:43:57 crc kubenswrapper[4780]: fi Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: # going for maximum compatibility here: Feb 19 08:43:57 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 08:43:57 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 08:43:57 crc kubenswrapper[4780]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 08:43:57 crc kubenswrapper[4780]: # support updates Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:56.887665 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" podUID="6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.890871 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq42b\" (UniqueName: \"kubernetes.io/projected/d20f2167-7640-4e39-95cc-4007180d1e49-kube-api-access-fq42b\") pod \"d20f2167-7640-4e39-95cc-4007180d1e49\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.891693 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d20f2167-7640-4e39-95cc-4007180d1e49-operator-scripts\") pod \"d20f2167-7640-4e39-95cc-4007180d1e49\" (UID: \"d20f2167-7640-4e39-95cc-4007180d1e49\") " Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.892634 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c875b359-e76d-4fd0-99fb-10c8b04dfb35-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.895305 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20f2167-7640-4e39-95cc-4007180d1e49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d20f2167-7640-4e39-95cc-4007180d1e49" (UID: "d20f2167-7640-4e39-95cc-4007180d1e49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.907517 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20f2167-7640-4e39-95cc-4007180d1e49-kube-api-access-fq42b" (OuterVolumeSpecName: "kube-api-access-fq42b") pod "d20f2167-7640-4e39-95cc-4007180d1e49" (UID: "d20f2167-7640-4e39-95cc-4007180d1e49"). InnerVolumeSpecName "kube-api-access-fq42b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.994780 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d20f2167-7640-4e39-95cc-4007180d1e49-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:56.994801 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq42b\" (UniqueName: \"kubernetes.io/projected/d20f2167-7640-4e39-95cc-4007180d1e49-kube-api-access-fq42b\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.003752 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.006820 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.016501 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.016570 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerName="galera" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.072435 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerID="d2450c95cfe8bb926c8e2c2b6644fad3afd770991b68151d38b6cb8f5a8ae9d1" exitCode=143 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.072531 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b47d55e-fb13-4f2f-8708-a68119e39b60","Type":"ContainerDied","Data":"d2450c95cfe8bb926c8e2c2b6644fad3afd770991b68151d38b6cb8f5a8ae9d1"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.080172 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eb1b-account-create-update-g7kmh" event={"ID":"8e18e815-82e7-4dba-a607-1e3ba75b98f6","Type":"ContainerDied","Data":"4d4d6fd362c3f87a15939e65179bf75de1b0f458c087299b0d2d1abd69aee996"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.080264 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eb1b-account-create-update-g7kmh" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.081947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" event={"ID":"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4","Type":"ContainerStarted","Data":"2a7093f84e64b2eee81362aa743e384023201c46ccc391aa87360e624f188ae8"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.095310 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.116542 4780 generic.go:334] "Generic (PLEG): container finished" podID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerID="ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5" exitCode=0 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.116683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98f20ebd-43c0-4332-988a-f487d7704bc1","Type":"ContainerDied","Data":"ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.149822 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-058b-account-create-update-vt5kl" event={"ID":"d20f2167-7640-4e39-95cc-4007180d1e49","Type":"ContainerDied","Data":"67ebb44ed3c129d979e6f96334d651d6f2f1015e98a0e7a6c5ed0fbb10d27232"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.149934 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-058b-account-create-update-vt5kl" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.209341 4780 generic.go:334] "Generic (PLEG): container finished" podID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerID="33372ea022a8bd6de99a3d6f15e51d7ba430019ef7b27207983d49036151c801" exitCode=143 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.209420 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84f494b65f-swr5f" event={"ID":"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba","Type":"ContainerDied","Data":"33372ea022a8bd6de99a3d6f15e51d7ba430019ef7b27207983d49036151c801"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.212901 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-565f58cc6f-vwtvf"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.213384 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-565f58cc6f-vwtvf" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-httpd" containerID="cri-o://b2d583309f49dd49d17c80f25a85efc092769c4ff96637acdd9473aa868d7556" gracePeriod=30 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.213390 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-565f58cc6f-vwtvf" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-server" containerID="cri-o://a452ac5cf46573cac8666add1030829b2829b3e8372bbc020c65c25f15121df8" gracePeriod=30 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.222166 4780 generic.go:334] "Generic (PLEG): container finished" podID="a7fa9686-243a-4fbe-ba17-93f9e4aa822c" containerID="8054af60ddd0b374a2c0f62a832f4a309e45631fbcb23191918b751a178136ea" exitCode=0 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.222278 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a7fa9686-243a-4fbe-ba17-93f9e4aa822c","Type":"ContainerDied","Data":"8054af60ddd0b374a2c0f62a832f4a309e45631fbcb23191918b751a178136ea"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.226670 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d459ce0-3049-4b3a-a076-682771965fc2" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" exitCode=0 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.226763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerDied","Data":"253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.229223 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c29-account-create-update-qnlgt" event={"ID":"06f6487b-6488-44c0-b2de-2a7f8955a46a","Type":"ContainerStarted","Data":"182d87c703c0082ae00e8936c75673a631b946f2610b8077815f648e19461ae6"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.260686 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eb1b-account-create-update-g7kmh"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.269301 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-eb1b-account-create-update-g7kmh"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.275762 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c875b359-e76d-4fd0-99fb-10c8b04dfb35/ovsdbserver-nb/0.log" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.276272 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.288672 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-058b-account-create-update-vt5kl"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.288710 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c875b359-e76d-4fd0-99fb-10c8b04dfb35","Type":"ContainerDied","Data":"4cd2a790e9ced9e9f98c43521f5bde9840c3d1cc6d36d437ecc0d00d02c54f0c"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.288740 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-058b-account-create-update-vt5kl"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.288763 4780 scope.go:117] "RemoveContainer" containerID="e1c828e53372b01ed0b60ce38962b074fb08a9c3280ed7aac4fbb5ce93ddbb17" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.306940 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f7dn2_b1c0ef0e-b38d-48e6-b006-8e528c70ff18/openstack-network-exporter/0.log" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.307007 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f7dn2" event={"ID":"b1c0ef0e-b38d-48e6-b006-8e528c70ff18","Type":"ContainerDied","Data":"391e44d34890f5953047c2ae14116791b4fc3def25c357ccc72c4a7f8a31e01a"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.307077 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-f7dn2" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.317688 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-29vx8" event={"ID":"c0d666c4-abfe-4b46-90db-1fd272d8adb4","Type":"ContainerDied","Data":"3026de95c972f644dcc6975e685130bb67a66ef8e249d17cd63109465aaa4192"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.317818 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-29vx8" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.335148 4780 generic.go:334] "Generic (PLEG): container finished" podID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerID="cf73772a4d01bf87fe0b1f3121d3412df9da363dc17c7d7d04a7882814ffc9ad" exitCode=143 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.335205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" event={"ID":"f650c235-dc2c-4737-9624-e2ea4d9ed761","Type":"ContainerDied","Data":"cf73772a4d01bf87fe0b1f3121d3412df9da363dc17c7d7d04a7882814ffc9ad"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.339838 4780 generic.go:334] "Generic (PLEG): container finished" podID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerID="473462e223ca6549c8137f0111bdd6185533438f8cbe7f0b6f647848e628f02b" exitCode=1 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.339895 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ws48" event={"ID":"c5a38bbd-e720-458c-b364-a10afe06f51e","Type":"ContainerDied","Data":"473462e223ca6549c8137f0111bdd6185533438f8cbe7f0b6f647848e628f02b"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.344160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f929-account-create-update-7rdjx" 
event={"ID":"c180e0b2-79c3-49b7-bac3-f868aeebd2cc","Type":"ContainerStarted","Data":"1c14ad179e653523b597c68e8e4958feb89ca479da7d4cc40224fcdd41bd5879"} Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.346224 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:43:57 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: if [ -n "neutron" ]; then Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="neutron" Feb 19 08:43:57 crc kubenswrapper[4780]: else Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="*" Feb 19 08:43:57 crc kubenswrapper[4780]: fi Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: # going for maximum compatibility here: Feb 19 08:43:57 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 08:43:57 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 08:43:57 crc kubenswrapper[4780]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 08:43:57 crc kubenswrapper[4780]: # support updates Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.353145 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-f929-account-create-update-7rdjx" podUID="c180e0b2-79c3-49b7-bac3-f868aeebd2cc" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.378600 4780 generic.go:334] "Generic (PLEG): container finished" podID="4ef67457-e347-4ea9-b488-32b52af9146c" containerID="53eecf6f3abbe44f7e06ac0af7e4deebaf1979eb160ea1159e05a543ac4aea01" exitCode=143 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.378657 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" event={"ID":"4ef67457-e347-4ea9-b488-32b52af9146c","Type":"ContainerDied","Data":"53eecf6f3abbe44f7e06ac0af7e4deebaf1979eb160ea1159e05a543ac4aea01"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.387217 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" event={"ID":"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c","Type":"ContainerStarted","Data":"5e6c083602cfba7741b67b1579c3d8552acf749b26e5ec81cd7b43ef2fefb453"} Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.423102 4780 generic.go:334] "Generic (PLEG): container finished" podID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerID="f4e52f5c3cc7e79bbf52ddff38d3ac4f6046c9da7c1d0d4fb161ff87adbe9315" exitCode=143 Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.428578 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7","Type":"ContainerDied","Data":"f4e52f5c3cc7e79bbf52ddff38d3ac4f6046c9da7c1d0d4fb161ff87adbe9315"} Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.445577 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:43:57 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: if [ -n "placement" ]; then Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="placement" Feb 19 08:43:57 crc kubenswrapper[4780]: else Feb 19 08:43:57 crc kubenswrapper[4780]: GRANT_DATABASE="*" Feb 19 08:43:57 crc kubenswrapper[4780]: fi Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: # going for maximum compatibility here: Feb 19 08:43:57 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 08:43:57 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 08:43:57 crc kubenswrapper[4780]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 08:43:57 crc kubenswrapper[4780]: # support updates Feb 19 08:43:57 crc kubenswrapper[4780]: Feb 19 08:43:57 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.449312 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-7aae-account-create-update-7t6nb" podUID="1b8f6013-5488-4922-94d6-167007269739" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.488073 4780 scope.go:117] "RemoveContainer" containerID="fa846a4869b3acb10eb8a21741ba18e7569c9f2c7d5f843a9c97988ef168cc97" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.488645 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7677694455-29vx8"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.495214 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7677694455-29vx8"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.507635 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-f7dn2"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.510504 4780 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.510782 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts podName:c5a38bbd-e720-458c-b364-a10afe06f51e nodeName:}" failed. No retries permitted until 2026-02-19 08:43:59.510755724 +0000 UTC m=+1382.254413173 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts") pod "root-account-create-update-6ws48" (UID: "c5a38bbd-e720-458c-b364-a10afe06f51e") : configmap "openstack-cell1-scripts" not found Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.517919 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-f7dn2"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.541766 4780 scope.go:117] "RemoveContainer" containerID="3410061fc16202d0f292ee59bc88124956f67382228dfe5e24ccf6f91f2ce7cf" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.546464 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.570844 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591245 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zkbjk"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591655 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="ovsdbserver-sb" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591667 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="ovsdbserver-sb" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591681 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="ovsdbserver-nb" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591687 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="ovsdbserver-nb" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591704 4780 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591710 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591719 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591724 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591738 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerName="init" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591744 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerName="init" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591759 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c0ef0e-b38d-48e6-b006-8e528c70ff18" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591765 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c0ef0e-b38d-48e6-b006-8e528c70ff18" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.591774 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerName="dnsmasq-dns" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591782 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerName="dnsmasq-dns" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591941 4780 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591963 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591974 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" containerName="ovsdbserver-nb" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591981 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" containerName="dnsmasq-dns" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591989 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" containerName="ovsdbserver-sb" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.591999 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c0ef0e-b38d-48e6-b006-8e528c70ff18" containerName="openstack-network-exporter" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.592657 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.594838 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.601209 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zkbjk"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.628771 4780 scope.go:117] "RemoveContainer" containerID="a26377ef0467876a0eb785288d7dc23c271d5abc978edde8cc042649188bc179" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.737781 4780 scope.go:117] "RemoveContainer" containerID="ffd0ca69537eaf03318f20e47fc4b933b926a18775547b75f0af4f02aadae087" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.743248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adadc01-71e9-4ef1-a02d-4aa566032209-operator-scripts\") pod \"root-account-create-update-zkbjk\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") " pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.743339 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9cb4\" (UniqueName: \"kubernetes.io/projected/9adadc01-71e9-4ef1-a02d-4aa566032209-kube-api-access-v9cb4\") pod \"root-account-create-update-zkbjk\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") " pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.743508 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.743548 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data podName:0bc00934-94b1-4be3-8bf4-845ad08a453f nodeName:}" failed. No retries permitted until 2026-02-19 08:44:01.743534637 +0000 UTC m=+1384.487192086 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data") pod "rabbitmq-cell1-server-0" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f") : configmap "rabbitmq-cell1-config-data" not found Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.774421 4780 scope.go:117] "RemoveContainer" containerID="9ce173c0a606bf17ab90d5b9a0e2f16da44fdff99e89dc2f4054023128c6d461" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.845082 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adadc01-71e9-4ef1-a02d-4aa566032209-operator-scripts\") pod \"root-account-create-update-zkbjk\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") " pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.845239 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9cb4\" (UniqueName: \"kubernetes.io/projected/9adadc01-71e9-4ef1-a02d-4aa566032209-kube-api-access-v9cb4\") pod \"root-account-create-update-zkbjk\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") " pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.847030 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adadc01-71e9-4ef1-a02d-4aa566032209-operator-scripts\") pod \"root-account-create-update-zkbjk\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") " pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.867570 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9cb4\" (UniqueName: \"kubernetes.io/projected/9adadc01-71e9-4ef1-a02d-4aa566032209-kube-api-access-v9cb4\") pod \"root-account-create-update-zkbjk\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") " pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.972841 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.977782 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.978279 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.978312 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.981359 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.984015 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d5b732f-e5c7-4bec-8c32-4d16e07ce21a" path="/var/lib/kubelet/pods/0d5b732f-e5c7-4bec-8c32-4d16e07ce21a/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.984263 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.984787 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ced3cd6-55dd-41ca-a3cb-25862916cfcd" path="/var/lib/kubelet/pods/2ced3cd6-55dd-41ca-a3cb-25862916cfcd/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.985595 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9" path="/var/lib/kubelet/pods/4ac3deb5-ea1f-479c-a8a4-bbcbd48b58e9/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.986874 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e18e815-82e7-4dba-a607-1e3ba75b98f6" 
path="/var/lib/kubelet/pods/8e18e815-82e7-4dba-a607-1e3ba75b98f6/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.987370 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c0ef0e-b38d-48e6-b006-8e528c70ff18" path="/var/lib/kubelet/pods/b1c0ef0e-b38d-48e6-b006-8e528c70ff18/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.988270 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc29e551-efab-43d8-94d5-1c515a76dca9" path="/var/lib/kubelet/pods/bc29e551-efab-43d8-94d5-1c515a76dca9/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.988883 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d666c4-abfe-4b46-90db-1fd272d8adb4" path="/var/lib/kubelet/pods/c0d666c4-abfe-4b46-90db-1fd272d8adb4/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.990691 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c875b359-e76d-4fd0-99fb-10c8b04dfb35" path="/var/lib/kubelet/pods/c875b359-e76d-4fd0-99fb-10c8b04dfb35/volumes" Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.990775 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:43:57 crc kubenswrapper[4780]: E0219 08:43:57.990841 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:43:57 crc kubenswrapper[4780]: I0219 08:43:57.991384 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d20f2167-7640-4e39-95cc-4007180d1e49" path="/var/lib/kubelet/pods/d20f2167-7640-4e39-95cc-4007180d1e49/volumes" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.021024 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zkbjk" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.200503 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.295983 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:58 crc kubenswrapper[4780]: E0219 08:43:58.309444 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964 is running failed: container process not found" containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 08:43:58 crc kubenswrapper[4780]: E0219 08:43:58.311244 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964 is running failed: container process not found" containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 08:43:58 crc kubenswrapper[4780]: E0219 08:43:58.316235 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964 is running failed: container process not found" 
containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 08:43:58 crc kubenswrapper[4780]: E0219 08:43:58.316312 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f0213271-e4da-4b8a-a732-b90d74d540ca" containerName="nova-scheduler-scheduler" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.369341 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.163:8776/healthcheck\": read tcp 10.217.0.2:41958->10.217.0.163:8776: read: connection reset by peer" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375641 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-combined-ca-bundle\") pod \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375709 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-config-data\") pod \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375742 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8g6j\" (UniqueName: \"kubernetes.io/projected/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-kube-api-access-t8g6j\") pod \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\" (UID: 
\"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375773 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts\") pod \"c5a38bbd-e720-458c-b364-a10afe06f51e\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375856 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-vencrypt-tls-certs\") pod \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-nova-novncproxy-tls-certs\") pod \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\" (UID: \"a7fa9686-243a-4fbe-ba17-93f9e4aa822c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.375963 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwbn\" (UniqueName: \"kubernetes.io/projected/c5a38bbd-e720-458c-b364-a10afe06f51e-kube-api-access-8fwbn\") pod \"c5a38bbd-e720-458c-b364-a10afe06f51e\" (UID: \"c5a38bbd-e720-458c-b364-a10afe06f51e\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.376860 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5a38bbd-e720-458c-b364-a10afe06f51e" (UID: "c5a38bbd-e720-458c-b364-a10afe06f51e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.381820 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.382293 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.386385 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.421164 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-kube-api-access-t8g6j" (OuterVolumeSpecName: "kube-api-access-t8g6j") pod "a7fa9686-243a-4fbe-ba17-93f9e4aa822c" (UID: "a7fa9686-243a-4fbe-ba17-93f9e4aa822c"). InnerVolumeSpecName "kube-api-access-t8g6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.421264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a38bbd-e720-458c-b364-a10afe06f51e-kube-api-access-8fwbn" (OuterVolumeSpecName: "kube-api-access-8fwbn") pod "c5a38bbd-e720-458c-b364-a10afe06f51e" (UID: "c5a38bbd-e720-458c-b364-a10afe06f51e"). InnerVolumeSpecName "kube-api-access-8fwbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.450356 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-config-data" (OuterVolumeSpecName: "config-data") pod "a7fa9686-243a-4fbe-ba17-93f9e4aa822c" (UID: "a7fa9686-243a-4fbe-ba17-93f9e4aa822c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477341 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-operator-scripts\") pod \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477396 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477417 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-generated\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477457 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfvbk\" (UniqueName: \"kubernetes.io/projected/01c909ff-b464-4334-a8d6-4e7a06b88126-kube-api-access-vfvbk\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-operator-scripts\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477539 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsm22\" 
(UniqueName: \"kubernetes.io/projected/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-kube-api-access-qsm22\") pod \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\" (UID: \"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477556 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-combined-ca-bundle\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477604 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-config-data\") pod \"f0213271-e4da-4b8a-a732-b90d74d540ca\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477636 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-kolla-config\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477742 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-combined-ca-bundle\") pod \"f0213271-e4da-4b8a-a732-b90d74d540ca\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477778 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-default\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc 
kubenswrapper[4780]: I0219 08:43:58.477804 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hchc\" (UniqueName: \"kubernetes.io/projected/f0213271-e4da-4b8a-a732-b90d74d540ca-kube-api-access-4hchc\") pod \"f0213271-e4da-4b8a-a732-b90d74d540ca\" (UID: \"f0213271-e4da-4b8a-a732-b90d74d540ca\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.477836 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-galera-tls-certs\") pod \"01c909ff-b464-4334-a8d6-4e7a06b88126\" (UID: \"01c909ff-b464-4334-a8d6-4e7a06b88126\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.478205 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.478221 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8g6j\" (UniqueName: \"kubernetes.io/projected/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-kube-api-access-t8g6j\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.478232 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a38bbd-e720-458c-b364-a10afe06f51e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.478241 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwbn\" (UniqueName: \"kubernetes.io/projected/c5a38bbd-e720-458c-b364-a10afe06f51e-kube-api-access-8fwbn\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.480785 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.483966 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c" (UID: "6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.484963 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.486880 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.489368 4780 generic.go:334] "Generic (PLEG): container finished" podID="7ef81227-694a-4bad-b32b-809d351ec668" containerID="a452ac5cf46573cac8666add1030829b2829b3e8372bbc020c65c25f15121df8" exitCode=0 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.489402 4780 generic.go:334] "Generic (PLEG): container finished" podID="7ef81227-694a-4bad-b32b-809d351ec668" containerID="b2d583309f49dd49d17c80f25a85efc092769c4ff96637acdd9473aa868d7556" exitCode=0 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.489453 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-565f58cc6f-vwtvf" event={"ID":"7ef81227-694a-4bad-b32b-809d351ec668","Type":"ContainerDied","Data":"a452ac5cf46573cac8666add1030829b2829b3e8372bbc020c65c25f15121df8"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.489483 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-565f58cc6f-vwtvf" event={"ID":"7ef81227-694a-4bad-b32b-809d351ec668","Type":"ContainerDied","Data":"b2d583309f49dd49d17c80f25a85efc092769c4ff96637acdd9473aa868d7556"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.490555 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.507358 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-kube-api-access-qsm22" (OuterVolumeSpecName: "kube-api-access-qsm22") pod "6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c" (UID: "6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c"). InnerVolumeSpecName "kube-api-access-qsm22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.510114 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.510426 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-central-agent" containerID="cri-o://1cfc9bf4b6c77d959a0f79e0b6127d18398787d6583e7ce82131bd062a4da946" gracePeriod=30 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.510879 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="proxy-httpd" containerID="cri-o://11546e606f0ed19c34b297d01479e536a89c87d04b0b835ed462a9e04f3f7c79" gracePeriod=30 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.510934 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="sg-core" containerID="cri-o://b1f8f92c605a74c8e4de483c71a576834ddf5781b144587755d1b657923d5477" gracePeriod=30 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.510982 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-notification-agent" 
containerID="cri-o://b3f39502442fe07eed7a1a803c209b72c96771a7fcc5a2b9991e435b889f53cf" gracePeriod=30 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.523825 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c909ff-b464-4334-a8d6-4e7a06b88126-kube-api-access-vfvbk" (OuterVolumeSpecName: "kube-api-access-vfvbk") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "kube-api-access-vfvbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.528973 4780 generic.go:334] "Generic (PLEG): container finished" podID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" exitCode=0 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.529088 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"01c909ff-b464-4334-a8d6-4e7a06b88126","Type":"ContainerDied","Data":"5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.529429 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.530903 4780 scope.go:117] "RemoveContainer" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.529117 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"01c909ff-b464-4334-a8d6-4e7a06b88126","Type":"ContainerDied","Data":"a33af5d578843a237aa2b608fd6424c8e4369be30a381dad4bc9bc1be48ff8e5"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.545700 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.545938 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a27398f8-93a8-47a9-a517-b161dad9cc11" containerName="kube-state-metrics" containerID="cri-o://d0d0ad671ef9d17b1605ad8b7bc48a11301a49a0cc5f0ee6915c47281564ebce" gracePeriod=30 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.548890 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0213271-e4da-4b8a-a732-b90d74d540ca-kube-api-access-4hchc" (OuterVolumeSpecName: "kube-api-access-4hchc") pod "f0213271-e4da-4b8a-a732-b90d74d540ca" (UID: "f0213271-e4da-4b8a-a732-b90d74d540ca"). InnerVolumeSpecName "kube-api-access-4hchc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.548969 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7fa9686-243a-4fbe-ba17-93f9e4aa822c" (UID: "a7fa9686-243a-4fbe-ba17-93f9e4aa822c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.551347 4780 generic.go:334] "Generic (PLEG): container finished" podID="f0213271-e4da-4b8a-a732-b90d74d540ca" containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" exitCode=0 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.551417 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0213271-e4da-4b8a-a732-b90d74d540ca","Type":"ContainerDied","Data":"641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.551445 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0213271-e4da-4b8a-a732-b90d74d540ca","Type":"ContainerDied","Data":"d5ad40aeceebd439882d65541cee715c8751d46f7f14f54243f844cfe7ca722a"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.551510 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.556133 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.556157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a7fa9686-243a-4fbe-ba17-93f9e4aa822c","Type":"ContainerDied","Data":"b8fc3500c3da5d1d26035f618ff78cc59de71ab9d09ddfc7e46d40c5450baaeb"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.559324 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582276 4780 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582356 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582366 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hchc\" (UniqueName: \"kubernetes.io/projected/f0213271-e4da-4b8a-a732-b90d74d540ca-kube-api-access-4hchc\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582375 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582395 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582406 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c909ff-b464-4334-a8d6-4e7a06b88126-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582416 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfvbk\" (UniqueName: \"kubernetes.io/projected/01c909ff-b464-4334-a8d6-4e7a06b88126-kube-api-access-vfvbk\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc 
kubenswrapper[4780]: I0219 08:43:58.582425 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582434 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c909ff-b464-4334-a8d6-4e7a06b88126-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582442 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsm22\" (UniqueName: \"kubernetes.io/projected/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c-kube-api-access-qsm22\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582847 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" event={"ID":"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4","Type":"ContainerDied","Data":"2a7093f84e64b2eee81362aa743e384023201c46ccc391aa87360e624f188ae8"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.582884 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7093f84e64b2eee81362aa743e384023201c46ccc391aa87360e624f188ae8" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.595526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c29-account-create-update-qnlgt" event={"ID":"06f6487b-6488-44c0-b2de-2a7f8955a46a","Type":"ContainerDied","Data":"182d87c703c0082ae00e8936c75673a631b946f2610b8077815f648e19461ae6"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.595671 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182d87c703c0082ae00e8936c75673a631b946f2610b8077815f648e19461ae6" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.599168 4780 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.611971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ws48" event={"ID":"c5a38bbd-e720-458c-b364-a10afe06f51e","Type":"ContainerDied","Data":"dacc6c203c48012ed2cb8c6fc91c126cb7481e4fd050101f7d23e80287ff96e3"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.612065 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6ws48" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.635177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" event={"ID":"6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c","Type":"ContainerDied","Data":"5e6c083602cfba7741b67b1579c3d8552acf749b26e5ec81cd7b43ef2fefb453"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.635269 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d1c0-account-create-update-6l7vh" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.640361 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.661227 4780 generic.go:334] "Generic (PLEG): container finished" podID="041edb21-581b-493e-a2f1-09e0b3559df1" containerID="7b196fe6ed67411ad242dd795c03caa7fd2feb9dbef00ff8b65a8ef8e03b4da0" exitCode=0 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.661391 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"041edb21-581b-493e-a2f1-09e0b3559df1","Type":"ContainerDied","Data":"7b196fe6ed67411ad242dd795c03caa7fd2feb9dbef00ff8b65a8ef8e03b4da0"} Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.663811 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.684182 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szdjs\" (UniqueName: \"kubernetes.io/projected/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-kube-api-access-szdjs\") pod \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.684273 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-operator-scripts\") pod \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\" (UID: \"93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.684845 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.684863 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.685408 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4" (UID: "93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.691608 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-kube-api-access-szdjs" (OuterVolumeSpecName: "kube-api-access-szdjs") pod "93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4" (UID: "93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4"). InnerVolumeSpecName "kube-api-access-szdjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.702046 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-config-data" (OuterVolumeSpecName: "config-data") pod "f0213271-e4da-4b8a-a732-b90d74d540ca" (UID: "f0213271-e4da-4b8a-a732-b90d74d540ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.704046 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "a7fa9686-243a-4fbe-ba17-93f9e4aa822c" (UID: "a7fa9686-243a-4fbe-ba17-93f9e4aa822c"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.718220 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0213271-e4da-4b8a-a732-b90d74d540ca" (UID: "f0213271-e4da-4b8a-a732-b90d74d540ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.757462 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "a7fa9686-243a-4fbe-ba17-93f9e4aa822c" (UID: "a7fa9686-243a-4fbe-ba17-93f9e4aa822c"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.758668 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.793046 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szdjs\" (UniqueName: \"kubernetes.io/projected/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-kube-api-access-szdjs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.793077 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.793086 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.793095 4780 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.793103 4780 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7fa9686-243a-4fbe-ba17-93f9e4aa822c-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.793113 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0213271-e4da-4b8a-a732-b90d74d540ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.805651 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-6l7vh"] Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.821476 4780 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d1c0-account-create-update-6l7vh"] Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.874597 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.874797 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="acd7c548-a04c-4556-bcae-618ae65658de" containerName="memcached" containerID="cri-o://138859dce20becf173ad96258d71984b57487f1a412d44d9fd3ffe1deb62aa39" gracePeriod=30 Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.905587 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f6487b-6488-44c0-b2de-2a7f8955a46a-operator-scripts\") pod \"06f6487b-6488-44c0-b2de-2a7f8955a46a\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.911228 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8d4h\" (UniqueName: \"kubernetes.io/projected/06f6487b-6488-44c0-b2de-2a7f8955a46a-kube-api-access-d8d4h\") pod \"06f6487b-6488-44c0-b2de-2a7f8955a46a\" (UID: \"06f6487b-6488-44c0-b2de-2a7f8955a46a\") " Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.913290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06f6487b-6488-44c0-b2de-2a7f8955a46a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06f6487b-6488-44c0-b2de-2a7f8955a46a" (UID: "06f6487b-6488-44c0-b2de-2a7f8955a46a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.928085 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6ws48"] Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.930779 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f6487b-6488-44c0-b2de-2a7f8955a46a-kube-api-access-d8d4h" (OuterVolumeSpecName: "kube-api-access-d8d4h") pod "06f6487b-6488-44c0-b2de-2a7f8955a46a" (UID: "06f6487b-6488-44c0-b2de-2a7f8955a46a"). InnerVolumeSpecName "kube-api-access-d8d4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:58 crc kubenswrapper[4780]: I0219 08:43:58.952727 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3c22-account-create-update-p5jfv"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.044614 4780 scope.go:117] "RemoveContainer" containerID="f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.068785 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f6487b-6488-44c0-b2de-2a7f8955a46a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.068820 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8d4h\" (UniqueName: \"kubernetes.io/projected/06f6487b-6488-44c0-b2de-2a7f8955a46a-kube-api-access-d8d4h\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.074364 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "01c909ff-b464-4334-a8d6-4e7a06b88126" (UID: "01c909ff-b464-4334-a8d6-4e7a06b88126"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.082684 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6ws48"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.092317 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3c22-account-create-update-p5jfv"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.108337 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3c22-account-create-update-spstv"] Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.109182 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerName="mariadb-account-create-update" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109199 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerName="mariadb-account-create-update" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.109221 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fa9686-243a-4fbe-ba17-93f9e4aa822c" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109228 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fa9686-243a-4fbe-ba17-93f9e4aa822c" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.109243 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerName="galera" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109249 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerName="galera" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.109269 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" 
containerName="mysql-bootstrap" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109276 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerName="mysql-bootstrap" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.109290 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerName="mariadb-account-create-update" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109297 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerName="mariadb-account-create-update" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.109305 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0213271-e4da-4b8a-a732-b90d74d540ca" containerName="nova-scheduler-scheduler" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109311 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0213271-e4da-4b8a-a732-b90d74d540ca" containerName="nova-scheduler-scheduler" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109626 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" containerName="galera" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109609 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:60650->10.217.0.199:8775: read: connection reset by peer" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109647 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerName="mariadb-account-create-update" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109803 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f0213271-e4da-4b8a-a732-b90d74d540ca" containerName="nova-scheduler-scheduler" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109890 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fa9686-243a-4fbe-ba17-93f9e4aa822c" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109961 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" containerName="mariadb-account-create-update" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.109694 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:60666->10.217.0.199:8775: read: connection reset by peer" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.112061 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.117249 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.138755 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.143213 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.143375 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3c22-account-create-update-spstv"] Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.148449 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.148503 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerName="nova-cell1-conductor-conductor" Feb 19 
08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.171076 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfxk\" (UniqueName: \"kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.171219 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.171272 4780 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c909ff-b464-4334-a8d6-4e7a06b88126-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.212396 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tdzz8"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.250481 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tdzz8"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.275327 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.275447 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nkfxk\" (UniqueName: \"kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.277769 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.277823 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts podName:63fd5f2e-8133-4c0b-b15c-006db1f17fed nodeName:}" failed. No retries permitted until 2026-02-19 08:43:59.777807424 +0000 UTC m=+1382.521464873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts") pod "keystone-3c22-account-create-update-spstv" (UID: "63fd5f2e-8133-4c0b-b15c-006db1f17fed") : configmap "openstack-scripts" not found Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.287115 4780 projected.go:194] Error preparing data for projected volume kube-api-access-nkfxk for pod openstack/keystone-3c22-account-create-update-spstv: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.287209 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk podName:63fd5f2e-8133-4c0b-b15c-006db1f17fed nodeName:}" failed. No retries permitted until 2026-02-19 08:43:59.787190217 +0000 UTC m=+1382.530847666 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nkfxk" (UniqueName: "kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk") pod "keystone-3c22-account-create-update-spstv" (UID: "63fd5f2e-8133-4c0b-b15c-006db1f17fed") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.296187 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sfs2h"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.340705 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-68c564b849-pqj6g"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.340933 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-68c564b849-pqj6g" podUID="e3467470-e6f9-49c1-b49f-8cea159e5af9" containerName="keystone-api" containerID="cri-o://bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918" gracePeriod=30 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.397669 4780 scope.go:117] "RemoveContainer" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.402763 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4\": container with ID starting with 5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4 not found: ID does not exist" containerID="5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.402806 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4"} err="failed to get container status \"5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4\": rpc error: 
code = NotFound desc = could not find container \"5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4\": container with ID starting with 5ac72fd54e749a5b41ae7a387cfc91840097cd05acb4c79e817857962f5131b4 not found: ID does not exist" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.402835 4780 scope.go:117] "RemoveContainer" containerID="f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.403217 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67\": container with ID starting with f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67 not found: ID does not exist" containerID="f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.403238 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67"} err="failed to get container status \"f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67\": rpc error: code = NotFound desc = could not find container \"f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67\": container with ID starting with f557ff21c0ad53a53eccd7520d9acddd0bbdf15f3f619376335c5c31e615dc67 not found: ID does not exist" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.403257 4780 scope.go:117] "RemoveContainer" containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.407240 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sfs2h"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.468641 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 08:43:59 crc 
kubenswrapper[4780]: I0219 08:43:59.497790 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3c22-account-create-update-spstv"] Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.498454 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nkfxk operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-3c22-account-create-update-spstv" podUID="63fd5f2e-8133-4c0b-b15c-006db1f17fed" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.499318 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-w7s5g"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.507675 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.532492 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.532588 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-w7s5g"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583690 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-public-tls-certs\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583747 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583808 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-scripts\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583863 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-config-data\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583887 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-combined-ca-bundle\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583937 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-log-httpd\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583968 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-run-httpd\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.583986 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-internal-tls-certs\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 
19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584007 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/041edb21-581b-493e-a2f1-09e0b3559df1-etc-machine-id\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584056 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-internal-tls-certs\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584073 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-combined-ca-bundle\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584107 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data-custom\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584149 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-public-tls-certs\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584171 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-etc-swift\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584213 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/041edb21-581b-493e-a2f1-09e0b3559df1-logs\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584250 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbbcc\" (UniqueName: \"kubernetes.io/projected/041edb21-581b-493e-a2f1-09e0b3559df1-kube-api-access-pbbcc\") pod \"041edb21-581b-493e-a2f1-09e0b3559df1\" (UID: \"041edb21-581b-493e-a2f1-09e0b3559df1\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.584272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwglk\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-kube-api-access-jwglk\") pod \"7ef81227-694a-4bad-b32b-809d351ec668\" (UID: \"7ef81227-694a-4bad-b32b-809d351ec668\") " Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.607034 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zkbjk"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.609150 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/041edb21-581b-493e-a2f1-09e0b3559df1-logs" (OuterVolumeSpecName: "logs") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.610113 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.611048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.614257 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/041edb21-581b-493e-a2f1-09e0b3559df1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.615415 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-kube-api-access-jwglk" (OuterVolumeSpecName: "kube-api-access-jwglk") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "kube-api-access-jwglk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.627665 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.636263 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.643059 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041edb21-581b-493e-a2f1-09e0b3559df1-kube-api-access-pbbcc" (OuterVolumeSpecName: "kube-api-access-pbbcc") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "kube-api-access-pbbcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.646235 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.648816 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.648925 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.656411 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-scripts" (OuterVolumeSpecName: "scripts") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.665202 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.669171 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.673182 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691162 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/041edb21-581b-493e-a2f1-09e0b3559df1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691212 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbbcc\" (UniqueName: \"kubernetes.io/projected/041edb21-581b-493e-a2f1-09e0b3559df1-kube-api-access-pbbcc\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691223 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwglk\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-kube-api-access-jwglk\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691232 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691240 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691250 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef81227-694a-4bad-b32b-809d351ec668-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691276 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/041edb21-581b-493e-a2f1-09e0b3559df1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691286 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.691294 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ef81227-694a-4bad-b32b-809d351ec668-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.707617 4780 generic.go:334] "Generic (PLEG): container finished" podID="a27398f8-93a8-47a9-a517-b161dad9cc11" containerID="d0d0ad671ef9d17b1605ad8b7bc48a11301a49a0cc5f0ee6915c47281564ebce" exitCode=2 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.707712 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a27398f8-93a8-47a9-a517-b161dad9cc11","Type":"ContainerDied","Data":"d0d0ad671ef9d17b1605ad8b7bc48a11301a49a0cc5f0ee6915c47281564ebce"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.710183 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-proxy-565f58cc6f-vwtvf" event={"ID":"7ef81227-694a-4bad-b32b-809d351ec668","Type":"ContainerDied","Data":"41bda972e55b925a86e62f6b1b59a62ad4919b285a79ece69ad219fbb1476b3d"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.710308 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-565f58cc6f-vwtvf" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.714865 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerID="11546e606f0ed19c34b297d01479e536a89c87d04b0b835ed462a9e04f3f7c79" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.714899 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerID="b1f8f92c605a74c8e4de483c71a576834ddf5781b144587755d1b657923d5477" exitCode=2 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.714915 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerID="b3f39502442fe07eed7a1a803c209b72c96771a7fcc5a2b9991e435b889f53cf" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.714929 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerID="1cfc9bf4b6c77d959a0f79e0b6127d18398787d6583e7ce82131bd062a4da946" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.714979 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerDied","Data":"11546e606f0ed19c34b297d01479e536a89c87d04b0b835ed462a9e04f3f7c79"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.715003 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerDied","Data":"b1f8f92c605a74c8e4de483c71a576834ddf5781b144587755d1b657923d5477"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.715022 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerDied","Data":"b3f39502442fe07eed7a1a803c209b72c96771a7fcc5a2b9991e435b889f53cf"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.715040 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerDied","Data":"1cfc9bf4b6c77d959a0f79e0b6127d18398787d6583e7ce82131bd062a4da946"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.717446 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"041edb21-581b-493e-a2f1-09e0b3559df1","Type":"ContainerDied","Data":"4cab19d07a29637de3732196fc0dfabff1657564c847d3f62e1dec3bafb7095a"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.717538 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.720501 4780 generic.go:334] "Generic (PLEG): container finished" podID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerID="ba25906e3f30c93cfd93251995fbe6b9adb85d14c0ee7551594e1bd77644bf06" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.720558 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7","Type":"ContainerDied","Data":"ba25906e3f30c93cfd93251995fbe6b9adb85d14c0ee7551594e1bd77644bf06"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.745251 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.746311 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.760138 4780 scope.go:117] "RemoveContainer" containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.769052 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964\": container with ID starting with 641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964 not found: ID does not exist" 
containerID="641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.769091 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964"} err="failed to get container status \"641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964\": rpc error: code = NotFound desc = could not find container \"641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964\": container with ID starting with 641241f035bd01c8050a2528f6fa7e838a90979194824e8d0e8ec6aa414d0964 not found: ID does not exist" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.769117 4780 scope.go:117] "RemoveContainer" containerID="8054af60ddd0b374a2c0f62a832f4a309e45631fbcb23191918b751a178136ea" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.783542 4780 generic.go:334] "Generic (PLEG): container finished" podID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerID="470d613c3f2933cabeb420246069bef8c1516a00e6cebf54fd8f45fec126403e" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.783627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a69047c-4c8d-4b93-82b3-005a9e83f686","Type":"ContainerDied","Data":"470d613c3f2933cabeb420246069bef8c1516a00e6cebf54fd8f45fec126403e"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.792407 4780 generic.go:334] "Generic (PLEG): container finished" podID="acd7c548-a04c-4556-bcae-618ae65658de" containerID="138859dce20becf173ad96258d71984b57487f1a412d44d9fd3ffe1deb62aa39" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.792496 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acd7c548-a04c-4556-bcae-618ae65658de","Type":"ContainerDied","Data":"138859dce20becf173ad96258d71984b57487f1a412d44d9fd3ffe1deb62aa39"} Feb 19 08:43:59 crc 
kubenswrapper[4780]: I0219 08:43:59.793845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfxk\" (UniqueName: \"kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.793993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.794148 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.794204 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts podName:63fd5f2e-8133-4c0b-b15c-006db1f17fed nodeName:}" failed. No retries permitted until 2026-02-19 08:44:00.794186313 +0000 UTC m=+1383.537843762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts") pod "keystone-3c22-account-create-update-spstv" (UID: "63fd5f2e-8133-4c0b-b15c-006db1f17fed") : configmap "openstack-scripts" not found Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.807808 4780 projected.go:194] Error preparing data for projected volume kube-api-access-nkfxk for pod openstack/keystone-3c22-account-create-update-spstv: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 08:43:59 crc kubenswrapper[4780]: E0219 08:43:59.807880 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk podName:63fd5f2e-8133-4c0b-b15c-006db1f17fed nodeName:}" failed. No retries permitted until 2026-02-19 08:44:00.807861183 +0000 UTC m=+1383.551518632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nkfxk" (UniqueName: "kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk") pod "keystone-3c22-account-create-update-spstv" (UID: "63fd5f2e-8133-4c0b-b15c-006db1f17fed") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.824379 4780 generic.go:334] "Generic (PLEG): container finished" podID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerID="6cc78ab8f7b9e9df271b1241208a5165a0e1b133172de580b0941a07a1cbbb55" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.824458 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84f494b65f-swr5f" event={"ID":"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba","Type":"ContainerDied","Data":"6cc78ab8f7b9e9df271b1241208a5165a0e1b133172de580b0941a07a1cbbb55"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.826666 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="4ef67457-e347-4ea9-b488-32b52af9146c" containerID="516c8de3c33c0337fd76fb32a1510070e91e8a75deedbf4866e716ba08c4c8aa" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.826726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" event={"ID":"4ef67457-e347-4ea9-b488-32b52af9146c","Type":"ContainerDied","Data":"516c8de3c33c0337fd76fb32a1510070e91e8a75deedbf4866e716ba08c4c8aa"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.848614 4780 generic.go:334] "Generic (PLEG): container finished" podID="80168270-a6db-4ef2-833b-5d2eb2781779" containerID="c021cdf9bb1cdb9fed0336ae04d70630c86309a60b2593d61c63d06c8b046dcd" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.848726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d56d997b-gx5gk" event={"ID":"80168270-a6db-4ef2-833b-5d2eb2781779","Type":"ContainerDied","Data":"c021cdf9bb1cdb9fed0336ae04d70630c86309a60b2593d61c63d06c8b046dcd"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.855057 4780 generic.go:334] "Generic (PLEG): container finished" podID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerID="bcacaffefa0805038ec68a239723691428dbbee367f236e1e7e7b362dd644e5e" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.855108 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa951d8d-6e05-4995-9a80-fb0808216e61","Type":"ContainerDied","Data":"bcacaffefa0805038ec68a239723691428dbbee367f236e1e7e7b362dd644e5e"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.857803 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerID="27850fd9ac7fe009c176f0a9206a0fd99a8d233881001985a4dcd3b476a6ee51" exitCode=0 Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.857877 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-623f-account-create-update-5w2cr" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.858634 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c29-account-create-update-qnlgt" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.860304 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.861066 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b47d55e-fb13-4f2f-8708-a68119e39b60","Type":"ContainerDied","Data":"27850fd9ac7fe009c176f0a9206a0fd99a8d233881001985a4dcd3b476a6ee51"} Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.903322 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.919031 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.937275 4780 scope.go:117] "RemoveContainer" containerID="473462e223ca6549c8137f0111bdd6185533438f8cbe7f0b6f647848e628f02b" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.940007 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.959801 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c909ff-b464-4334-a8d6-4e7a06b88126" path="/var/lib/kubelet/pods/01c909ff-b464-4334-a8d6-4e7a06b88126/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.961235 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2196ec7a-fea4-422a-8c7d-0350b6dd19c0" path="/var/lib/kubelet/pods/2196ec7a-fea4-422a-8c7d-0350b6dd19c0/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.962270 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad6a771-c42f-4893-9d53-488723d532b1" path="/var/lib/kubelet/pods/5ad6a771-c42f-4893-9d53-488723d532b1/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.963647 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c" path="/var/lib/kubelet/pods/6fdb5e23-ef57-4b5a-ac3f-fb1663e5c82c/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.964182 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fa9686-243a-4fbe-ba17-93f9e4aa822c" path="/var/lib/kubelet/pods/a7fa9686-243a-4fbe-ba17-93f9e4aa822c/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.966509 4780 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4821312-0274-4930-bd0a-d6438b1e3e56" path="/var/lib/kubelet/pods/b4821312-0274-4930-bd0a-d6438b1e3e56/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.966999 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a38bbd-e720-458c-b364-a10afe06f51e" path="/var/lib/kubelet/pods/c5a38bbd-e720-458c-b364-a10afe06f51e/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.967492 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e5a977-7deb-4a69-b388-8050af25ae68" path="/var/lib/kubelet/pods/c6e5a977-7deb-4a69-b388-8050af25ae68/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.968434 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0213271-e4da-4b8a-a732-b90d74d540ca" path="/var/lib/kubelet/pods/f0213271-e4da-4b8a-a732-b90d74d540ca/volumes" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.991373 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-config-data" (OuterVolumeSpecName: "config-data") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:43:59 crc kubenswrapper[4780]: I0219 08:43:59.994589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.003573 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.003598 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.003607 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.003619 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.003629 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.123526 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.123584 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ef81227-694a-4bad-b32b-809d351ec668" (UID: "7ef81227-694a-4bad-b32b-809d351ec668"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.135535 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data" (OuterVolumeSpecName: "config-data") pod "041edb21-581b-493e-a2f1-09e0b3559df1" (UID: "041edb21-581b-493e-a2f1-09e0b3559df1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.162074 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerName="galera" containerID="cri-o://a465db40f9eca8dcae409a58d79d3d9cd987c42bad7e6a4443d618b97692b1e5" gracePeriod=30 Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.211457 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041edb21-581b-493e-a2f1-09e0b3559df1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.211504 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.211519 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7ef81227-694a-4bad-b32b-809d351ec668-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.286634 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zkbjk"] Feb 19 08:44:00 crc kubenswrapper[4780]: W0219 08:44:00.298368 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9adadc01_71e9_4ef1_a02d_4aa566032209.slice/crio-c03c5a5cb4239089e06d7d4f0271245a9122ff0018dbc2776a18d063409c1fcc WatchSource:0}: Error finding container c03c5a5cb4239089e06d7d4f0271245a9122ff0018dbc2776a18d063409c1fcc: Status 404 returned error can't find the container with id c03c5a5cb4239089e06d7d4f0271245a9122ff0018dbc2776a18d063409c1fcc Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.300935 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.310646 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869 is running failed: container process not found" containerID="db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.320395 4780 scope.go:117] "RemoveContainer" containerID="a452ac5cf46573cac8666add1030829b2829b3e8372bbc020c65c25f15121df8" Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.320568 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869 is running failed: container process not found" 
containerID="db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.324225 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869 is running failed: container process not found" containerID="db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.324356 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="51a5891a-27e3-404a-b8c8-51c2399e8903" containerName="nova-cell0-conductor-conductor" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.334154 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.351165 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.352306 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.362789 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.364407 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-5w2cr"] Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.367244 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 08:44:00 crc kubenswrapper[4780]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 08:44:00 crc kubenswrapper[4780]: Feb 19 08:44:00 crc kubenswrapper[4780]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 08:44:00 crc kubenswrapper[4780]: Feb 19 08:44:00 crc kubenswrapper[4780]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 08:44:00 crc kubenswrapper[4780]: Feb 19 08:44:00 crc kubenswrapper[4780]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 08:44:00 crc kubenswrapper[4780]: Feb 19 08:44:00 crc kubenswrapper[4780]: if [ -n "" ]; then Feb 19 08:44:00 crc kubenswrapper[4780]: GRANT_DATABASE="" Feb 19 08:44:00 crc kubenswrapper[4780]: else Feb 19 08:44:00 crc kubenswrapper[4780]: GRANT_DATABASE="*" Feb 19 08:44:00 crc kubenswrapper[4780]: fi Feb 19 08:44:00 crc kubenswrapper[4780]: Feb 19 08:44:00 crc kubenswrapper[4780]: # going for maximum compatibility here: Feb 19 08:44:00 crc kubenswrapper[4780]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 08:44:00 crc kubenswrapper[4780]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 08:44:00 crc kubenswrapper[4780]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 08:44:00 crc kubenswrapper[4780]: # support updates Feb 19 08:44:00 crc kubenswrapper[4780]: Feb 19 08:44:00 crc kubenswrapper[4780]: $MYSQL_CMD < logger="UnhandledError" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.367587 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-623f-account-create-update-5w2cr"] Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.368561 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zkbjk" podUID="9adadc01-71e9-4ef1-a02d-4aa566032209" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.376405 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.383963 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.393705 4780 scope.go:117] "RemoveContainer" containerID="b2d583309f49dd49d17c80f25a85efc092769c4ff96637acdd9473aa868d7556" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.402548 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419409 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzcvv\" (UniqueName: \"kubernetes.io/projected/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-kube-api-access-hzcvv\") pod \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419465 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-scripts\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pr56\" (UniqueName: \"kubernetes.io/projected/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-api-access-2pr56\") pod \"a27398f8-93a8-47a9-a517-b161dad9cc11\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419534 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-config\") pod \"a27398f8-93a8-47a9-a517-b161dad9cc11\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b8f6013-5488-4922-94d6-167007269739-operator-scripts\") pod \"1b8f6013-5488-4922-94d6-167007269739\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419629 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-internal-tls-certs\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419668 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-public-tls-certs\") pod \"9b47d55e-fb13-4f2f-8708-a68119e39b60\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-combined-ca-bundle\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419742 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80168270-a6db-4ef2-833b-5d2eb2781779-logs\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419772 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b47d55e-fb13-4f2f-8708-a68119e39b60-logs\") pod \"9b47d55e-fb13-4f2f-8708-a68119e39b60\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419861 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-combined-ca-bundle\") pod \"9b47d55e-fb13-4f2f-8708-a68119e39b60\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " Feb 19 08:44:00 crc 
kubenswrapper[4780]: I0219 08:44:00.419905 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-operator-scripts\") pod \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\" (UID: \"c180e0b2-79c3-49b7-bac3-f868aeebd2cc\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419932 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-config-data\") pod \"9b47d55e-fb13-4f2f-8708-a68119e39b60\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419965 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrzsl\" (UniqueName: \"kubernetes.io/projected/80168270-a6db-4ef2-833b-5d2eb2781779-kube-api-access-hrzsl\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.419992 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-certs\") pod \"a27398f8-93a8-47a9-a517-b161dad9cc11\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420017 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-config-data\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420042 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-internal-tls-certs\") pod \"9b47d55e-fb13-4f2f-8708-a68119e39b60\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420069 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-public-tls-certs\") pod \"80168270-a6db-4ef2-833b-5d2eb2781779\" (UID: \"80168270-a6db-4ef2-833b-5d2eb2781779\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420096 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-combined-ca-bundle\") pod \"a27398f8-93a8-47a9-a517-b161dad9cc11\" (UID: \"a27398f8-93a8-47a9-a517-b161dad9cc11\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420153 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9sxj\" (UniqueName: \"kubernetes.io/projected/1b8f6013-5488-4922-94d6-167007269739-kube-api-access-g9sxj\") pod \"1b8f6013-5488-4922-94d6-167007269739\" (UID: \"1b8f6013-5488-4922-94d6-167007269739\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420204 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7fxf\" (UniqueName: \"kubernetes.io/projected/9b47d55e-fb13-4f2f-8708-a68119e39b60-kube-api-access-q7fxf\") pod \"9b47d55e-fb13-4f2f-8708-a68119e39b60\" (UID: \"9b47d55e-fb13-4f2f-8708-a68119e39b60\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420402 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b8f6013-5488-4922-94d6-167007269739-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b8f6013-5488-4922-94d6-167007269739" (UID: "1b8f6013-5488-4922-94d6-167007269739"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.420936 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b8f6013-5488-4922-94d6-167007269739-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.423709 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80168270-a6db-4ef2-833b-5d2eb2781779-logs" (OuterVolumeSpecName: "logs") pod "80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.426821 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.431736 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b47d55e-fb13-4f2f-8708-a68119e39b60-logs" (OuterVolumeSpecName: "logs") pod "9b47d55e-fb13-4f2f-8708-a68119e39b60" (UID: "9b47d55e-fb13-4f2f-8708-a68119e39b60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.435259 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-kube-api-access-hzcvv" (OuterVolumeSpecName: "kube-api-access-hzcvv") pod "c180e0b2-79c3-49b7-bac3-f868aeebd2cc" (UID: "c180e0b2-79c3-49b7-bac3-f868aeebd2cc"). InnerVolumeSpecName "kube-api-access-hzcvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.439541 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b47d55e-fb13-4f2f-8708-a68119e39b60-kube-api-access-q7fxf" (OuterVolumeSpecName: "kube-api-access-q7fxf") pod "9b47d55e-fb13-4f2f-8708-a68119e39b60" (UID: "9b47d55e-fb13-4f2f-8708-a68119e39b60"). InnerVolumeSpecName "kube-api-access-q7fxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.439836 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80168270-a6db-4ef2-833b-5d2eb2781779-kube-api-access-hrzsl" (OuterVolumeSpecName: "kube-api-access-hrzsl") pod "80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "kube-api-access-hrzsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.451372 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.452660 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8f6013-5488-4922-94d6-167007269739-kube-api-access-g9sxj" (OuterVolumeSpecName: "kube-api-access-g9sxj") pod "1b8f6013-5488-4922-94d6-167007269739" (UID: "1b8f6013-5488-4922-94d6-167007269739"). InnerVolumeSpecName "kube-api-access-g9sxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.455439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c180e0b2-79c3-49b7-bac3-f868aeebd2cc" (UID: "c180e0b2-79c3-49b7-bac3-f868aeebd2cc"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.456449 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-scripts" (OuterVolumeSpecName: "scripts") pod "80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.456503 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-565f58cc6f-vwtvf"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.467583 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-565f58cc6f-vwtvf"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.477172 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-api-access-2pr56" (OuterVolumeSpecName: "kube-api-access-2pr56") pod "a27398f8-93a8-47a9-a517-b161dad9cc11" (UID: "a27398f8-93a8-47a9-a517-b161dad9cc11"). InnerVolumeSpecName "kube-api-access-2pr56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.487547 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.489821 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.500590 4780 scope.go:117] "RemoveContainer" containerID="7b196fe6ed67411ad242dd795c03caa7fd2feb9dbef00ff8b65a8ef8e03b4da0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.506152 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9c29-account-create-update-qnlgt"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.514408 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9c29-account-create-update-qnlgt"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522422 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef67457-e347-4ea9-b488-32b52af9146c-logs\") pod \"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522470 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-scripts\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522491 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-config-data\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522512 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-combined-ca-bundle\") pod \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " Feb 19 08:44:00 crc 
kubenswrapper[4780]: I0219 08:44:00.522533 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-config-data\") pod \"acd7c548-a04c-4556-bcae-618ae65658de\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-kolla-config\") pod \"acd7c548-a04c-4556-bcae-618ae65658de\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522566 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-logs\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522873 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-config-data\") pod \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522906 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx658\" (UniqueName: \"kubernetes.io/projected/acd7c548-a04c-4556-bcae-618ae65658de-kube-api-access-xx658\") pod \"acd7c548-a04c-4556-bcae-618ae65658de\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522924 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-public-tls-certs\") pod 
\"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522946 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-httpd-run\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.522962 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-config-data\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523002 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-internal-tls-certs\") pod \"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523029 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s26bp\" (UniqueName: \"kubernetes.io/projected/0a69047c-4c8d-4b93-82b3-005a9e83f686-kube-api-access-s26bp\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8mlc\" (UniqueName: \"kubernetes.io/projected/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-kube-api-access-j8mlc\") pod \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523073 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-public-tls-certs\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523100 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-logs\") pod \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523182 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-internal-tls-certs\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523236 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-combined-ca-bundle\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523267 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-combined-ca-bundle\") pod \"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523310 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-logs\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: 
\"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523326 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zg7m\" (UniqueName: \"kubernetes.io/projected/fa951d8d-6e05-4995-9a80-fb0808216e61-kube-api-access-7zg7m\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523352 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzzh6\" (UniqueName: \"kubernetes.io/projected/4ef67457-e347-4ea9-b488-32b52af9146c-kube-api-access-fzzh6\") pod \"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-combined-ca-bundle\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523397 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-memcached-tls-certs\") pod \"acd7c548-a04c-4556-bcae-618ae65658de\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-httpd-run\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523460 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-combined-ca-bundle\") pod \"acd7c548-a04c-4556-bcae-618ae65658de\" (UID: \"acd7c548-a04c-4556-bcae-618ae65658de\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523479 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data\") pod \"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523514 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523530 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0a69047c-4c8d-4b93-82b3-005a9e83f686\" (UID: \"0a69047c-4c8d-4b93-82b3-005a9e83f686\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-nova-metadata-tls-certs\") pod \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\" (UID: \"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523573 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-scripts\") pod \"fa951d8d-6e05-4995-9a80-fb0808216e61\" (UID: \"fa951d8d-6e05-4995-9a80-fb0808216e61\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.523609 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data-custom\") pod \"4ef67457-e347-4ea9-b488-32b52af9146c\" (UID: \"4ef67457-e347-4ea9-b488-32b52af9146c\") " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525189 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525210 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrzsl\" (UniqueName: \"kubernetes.io/projected/80168270-a6db-4ef2-833b-5d2eb2781779-kube-api-access-hrzsl\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525222 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9sxj\" (UniqueName: \"kubernetes.io/projected/1b8f6013-5488-4922-94d6-167007269739-kube-api-access-g9sxj\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525232 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7fxf\" (UniqueName: \"kubernetes.io/projected/9b47d55e-fb13-4f2f-8708-a68119e39b60-kube-api-access-q7fxf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525241 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzcvv\" (UniqueName: \"kubernetes.io/projected/c180e0b2-79c3-49b7-bac3-f868aeebd2cc-kube-api-access-hzcvv\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525251 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525260 4780 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pr56\" (UniqueName: \"kubernetes.io/projected/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-api-access-2pr56\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525269 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80168270-a6db-4ef2-833b-5d2eb2781779-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.525278 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b47d55e-fb13-4f2f-8708-a68119e39b60-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.530064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-logs" (OuterVolumeSpecName: "logs") pod "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" (UID: "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.533377 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef67457-e347-4ea9-b488-32b52af9146c-logs" (OuterVolumeSpecName: "logs") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.533854 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-logs" (OuterVolumeSpecName: "logs") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.534211 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.538500 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.538835 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "acd7c548-a04c-4556-bcae-618ae65658de" (UID: "acd7c548-a04c-4556-bcae-618ae65658de"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.539261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-logs" (OuterVolumeSpecName: "logs") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.539664 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-config-data" (OuterVolumeSpecName: "config-data") pod "acd7c548-a04c-4556-bcae-618ae65658de" (UID: "acd7c548-a04c-4556-bcae-618ae65658de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.541264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.549871 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-scripts" (OuterVolumeSpecName: "scripts") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.551156 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06f6487b_6488_44c0_b2de_2a7f8955a46a.slice/crio-182d87c703c0082ae00e8936c75673a631b946f2610b8077815f648e19461ae6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06f6487b_6488_44c0_b2de_2a7f8955a46a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51a5891a_27e3_404a_b8c8_51c2399e8903.slice/crio-db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51a5891a_27e3_404a_b8c8_51c2399e8903.slice/crio-conmon-db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041edb21_581b_493e_a2f1_09e0b3559df1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c4ea3e_9190_44e9_8cd1_fa2ecce7e5d4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ef81227_694a_4bad_b32b_809d351ec668.slice/crio-41bda972e55b925a86e62f6b1b59a62ad4919b285a79ece69ad219fbb1476b3d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ef81227_694a_4bad_b32b_809d351ec668.slice\": RecentStats: unable to find data in memory cache]" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.551376 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4ef67457-e347-4ea9-b488-32b52af9146c-kube-api-access-fzzh6" (OuterVolumeSpecName: "kube-api-access-fzzh6") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "kube-api-access-fzzh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.552183 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.552620 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa951d8d-6e05-4995-9a80-fb0808216e61-kube-api-access-7zg7m" (OuterVolumeSpecName: "kube-api-access-7zg7m") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "kube-api-access-7zg7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.556877 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.558148 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-config-data" (OuterVolumeSpecName: "config-data") pod "9b47d55e-fb13-4f2f-8708-a68119e39b60" (UID: "9b47d55e-fb13-4f2f-8708-a68119e39b60"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.566275 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a69047c-4c8d-4b93-82b3-005a9e83f686-kube-api-access-s26bp" (OuterVolumeSpecName: "kube-api-access-s26bp") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "kube-api-access-s26bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.566308 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-scripts" (OuterVolumeSpecName: "scripts") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.577709 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.578256 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.581567 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd7c548-a04c-4556-bcae-618ae65658de-kube-api-access-xx658" (OuterVolumeSpecName: "kube-api-access-xx658") pod "acd7c548-a04c-4556-bcae-618ae65658de" (UID: "acd7c548-a04c-4556-bcae-618ae65658de"). InnerVolumeSpecName "kube-api-access-xx658". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.583668 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-kube-api-access-j8mlc" (OuterVolumeSpecName: "kube-api-access-j8mlc") pod "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" (UID: "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7"). InnerVolumeSpecName "kube-api-access-j8mlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.631396 4780 scope.go:117] "RemoveContainer" containerID="9b449827662b36d6c9c93dd1e51b9613703943aec7408ea0336acf021bd59ad8" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632886 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632921 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632937 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632947 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632957 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 
08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632968 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef67457-e347-4ea9-b488-32b52af9146c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632979 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632989 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.632999 4780 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acd7c548-a04c-4556-bcae-618ae65658de-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633008 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633019 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx658\" (UniqueName: \"kubernetes.io/projected/acd7c548-a04c-4556-bcae-618ae65658de-kube-api-access-xx658\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633029 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa951d8d-6e05-4995-9a80-fb0808216e61-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633041 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s26bp\" (UniqueName: 
\"kubernetes.io/projected/0a69047c-4c8d-4b93-82b3-005a9e83f686-kube-api-access-s26bp\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633052 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8mlc\" (UniqueName: \"kubernetes.io/projected/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-kube-api-access-j8mlc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633061 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633070 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633079 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zg7m\" (UniqueName: \"kubernetes.io/projected/fa951d8d-6e05-4995-9a80-fb0808216e61-kube-api-access-7zg7m\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633090 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzzh6\" (UniqueName: \"kubernetes.io/projected/4ef67457-e347-4ea9-b488-32b52af9146c-kube-api-access-fzzh6\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.633102 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a69047c-4c8d-4b93-82b3-005a9e83f686-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.645946 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-config-data" (OuterVolumeSpecName: "config-data") pod 
"80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.682796 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.698976 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9b47d55e-fb13-4f2f-8708-a68119e39b60" (UID: "9b47d55e-fb13-4f2f-8708-a68119e39b60"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.700721 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.709298 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.731084 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acd7c548-a04c-4556-bcae-618ae65658de" (UID: "acd7c548-a04c-4556-bcae-618ae65658de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.734831 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.734856 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.734866 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.734877 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.734888 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.734897 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.734956 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.734999 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data podName:b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d nodeName:}" failed. No retries permitted until 2026-02-19 08:44:08.734986046 +0000 UTC m=+1391.478643495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data") pod "rabbitmq-server-0" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d") : configmap "rabbitmq-config-data" not found Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.737358 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.757605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "a27398f8-93a8-47a9-a517-b161dad9cc11" (UID: "a27398f8-93a8-47a9-a517-b161dad9cc11"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.761515 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.769045 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b47d55e-fb13-4f2f-8708-a68119e39b60" (UID: "9b47d55e-fb13-4f2f-8708-a68119e39b60"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.781248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b47d55e-fb13-4f2f-8708-a68119e39b60" (UID: "9b47d55e-fb13-4f2f-8708-a68119e39b60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.788619 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "a27398f8-93a8-47a9-a517-b161dad9cc11" (UID: "a27398f8-93a8-47a9-a517-b161dad9cc11"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.805101 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" (UID: "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.807389 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a27398f8-93a8-47a9-a517-b161dad9cc11" (UID: "a27398f8-93a8-47a9-a517-b161dad9cc11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.836661 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837197 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfxk\" (UniqueName: \"kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk\") pod \"keystone-3c22-account-create-update-spstv\" (UID: \"63fd5f2e-8133-4c0b-b15c-006db1f17fed\") " pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837609 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837624 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837657 4780 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837670 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837684 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837693 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b47d55e-fb13-4f2f-8708-a68119e39b60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837701 4780 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a27398f8-93a8-47a9-a517-b161dad9cc11-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.837710 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.837876 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.837989 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts podName:63fd5f2e-8133-4c0b-b15c-006db1f17fed nodeName:}" failed. No retries permitted until 2026-02-19 08:44:02.837962295 +0000 UTC m=+1385.581619744 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts") pod "keystone-3c22-account-create-update-spstv" (UID: "63fd5f2e-8133-4c0b-b15c-006db1f17fed") : configmap "openstack-scripts" not found Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.838975 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-config-data" (OuterVolumeSpecName: "config-data") pod "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" (UID: "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.841271 4780 projected.go:194] Error preparing data for projected volume kube-api-access-nkfxk for pod openstack/keystone-3c22-account-create-update-spstv: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 08:44:00 crc kubenswrapper[4780]: E0219 08:44:00.841360 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk podName:63fd5f2e-8133-4c0b-b15c-006db1f17fed nodeName:}" failed. 
No retries permitted until 2026-02-19 08:44:02.841324868 +0000 UTC m=+1385.584982317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nkfxk" (UniqueName: "kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk") pod "keystone-3c22-account-create-update-spstv" (UID: "63fd5f2e-8133-4c0b-b15c-006db1f17fed") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.866490 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.920692 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f929-account-create-update-7rdjx" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.920729 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f929-account-create-update-7rdjx" event={"ID":"c180e0b2-79c3-49b7-bac3-f868aeebd2cc","Type":"ContainerDied","Data":"1c14ad179e653523b597c68e8e4958feb89ca479da7d4cc40224fcdd41bd5879"} Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.931003 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d56d997b-gx5gk" event={"ID":"80168270-a6db-4ef2-833b-5d2eb2781779","Type":"ContainerDied","Data":"f5ef85bffb1dd240eec1f579c27150411a5232b23cf1f578d27c2de676195cbf"} Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.931068 4780 scope.go:117] "RemoveContainer" containerID="c021cdf9bb1cdb9fed0336ae04d70630c86309a60b2593d61c63d06c8b046dcd" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.934389 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78d56d997b-gx5gk" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.940020 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.940039 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.942221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.942940 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acd7c548-a04c-4556-bcae-618ae65658de","Type":"ContainerDied","Data":"480a47c2239a2c273902d5f0f5119cdda9abaeb197d1a746aaaeff595a40b947"} Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.943012 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.945467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa951d8d-6e05-4995-9a80-fb0808216e61","Type":"ContainerDied","Data":"370598fe716fe885844bbff003aa132b19df1be1ff9b55be0ec1fa7bdd383e79"} Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.945514 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.975376 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-config-data" (OuterVolumeSpecName: "config-data") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.975477 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a69047c-4c8d-4b93-82b3-005a9e83f686","Type":"ContainerDied","Data":"9a41684c0474d3e51271bf7ca643fee47573946e045e610d6bf81b446e427ef7"} Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.975767 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.983381 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" event={"ID":"4ef67457-e347-4ea9-b488-32b52af9146c","Type":"ContainerDied","Data":"5b81cf03233af003a848b04e070d14fda31b593a206c757e3ba3e3d686e8c95f"} Feb 19 08:44:00 crc kubenswrapper[4780]: I0219 08:44:00.985419 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f57f4f6f6-8lqlt" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.000977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee75a5b9-0f5b-4db0-ab84-e4848bf382a7","Type":"ContainerDied","Data":"c915b48677d2d967daefde99c7f3e2c8ccadc1685b22792fe92915b332d410a7"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.001321 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.038650 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zkbjk" event={"ID":"9adadc01-71e9-4ef1-a02d-4aa566032209","Type":"ContainerStarted","Data":"c03c5a5cb4239089e06d7d4f0271245a9122ff0018dbc2776a18d063409c1fcc"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.046035 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.046073 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.047892 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a27398f8-93a8-47a9-a517-b161dad9cc11","Type":"ContainerDied","Data":"a70aab4d7cef4926970f5bcbc8df9a9712c0b0107361ea53c5b644c4fa30626b"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.047993 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.053637 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84f494b65f-swr5f" event={"ID":"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba","Type":"ContainerDied","Data":"a301fe0eb3978ef5d7d997f3393c426041a650d2d986d9715d453c605deab566"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.054964 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a301fe0eb3978ef5d7d997f3393c426041a650d2d986d9715d453c605deab566" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.058760 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" (UID: "ee75a5b9-0f5b-4db0-ab84-e4848bf382a7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.069508 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b47d55e-fb13-4f2f-8708-a68119e39b60","Type":"ContainerDied","Data":"c14e7e2efabfd08179963d8420fd6ae68c4fdabeae619e11367cf10d208512cc"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.069604 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.147084 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.148403 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6bee84d-2233-4962-94e0-bfe3c8f26496","Type":"ContainerDied","Data":"918b2d42e69b07275cd6b8babf74e31c4a2b4665fddc1c165832b401f2fb924f"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.148449 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918b2d42e69b07275cd6b8babf74e31c4a2b4665fddc1c165832b401f2fb924f" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.149925 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.151181 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7aae-account-create-update-7t6nb" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.152145 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7aae-account-create-update-7t6nb" event={"ID":"1b8f6013-5488-4922-94d6-167007269739","Type":"ContainerDied","Data":"51fb3873321e3b7595a0316cdf20efd4dddaa4f6100f7d2c62bd9f4a3a5757bb"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.153708 4780 generic.go:334] "Generic (PLEG): container finished" podID="51a5891a-27e3-404a-b8c8-51c2399e8903" containerID="db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869" exitCode=0 Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.153762 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51a5891a-27e3-404a-b8c8-51c2399e8903","Type":"ContainerDied","Data":"db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.158291 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.158362 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "acd7c548-a04c-4556-bcae-618ae65658de" (UID: "acd7c548-a04c-4556-bcae-618ae65658de"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.159542 4780 generic.go:334] "Generic (PLEG): container finished" podID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerID="7edbc265ca1fca9fec89c4aa613f291e13e679212a1411e4d048f4165e32dd71" exitCode=0 Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.159620 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" event={"ID":"f650c235-dc2c-4737-9624-e2ea4d9ed761","Type":"ContainerDied","Data":"7edbc265ca1fca9fec89c4aa613f291e13e679212a1411e4d048f4165e32dd71"} Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.165040 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c22-account-create-update-spstv" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.171729 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0a69047c-4c8d-4b93-82b3-005a9e83f686" (UID: "0a69047c-4c8d-4b93-82b3-005a9e83f686"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.189368 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-config-data" (OuterVolumeSpecName: "config-data") pod "fa951d8d-6e05-4995-9a80-fb0808216e61" (UID: "fa951d8d-6e05-4995-9a80-fb0808216e61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.199345 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data" (OuterVolumeSpecName: "config-data") pod "4ef67457-e347-4ea9-b488-32b52af9146c" (UID: "4ef67457-e347-4ea9-b488-32b52af9146c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.204828 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248390 4780 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd7c548-a04c-4556-bcae-618ae65658de-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248835 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248847 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248856 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef67457-e347-4ea9-b488-32b52af9146c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 
19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248864 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa951d8d-6e05-4995-9a80-fb0808216e61-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248873 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a69047c-4c8d-4b93-82b3-005a9e83f686-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.248882 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.267471 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "80168270-a6db-4ef2-833b-5d2eb2781779" (UID: "80168270-a6db-4ef2-833b-5d2eb2781779"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.327881 4780 scope.go:117] "RemoveContainer" containerID="4a2deac91a19b42884cc5219eea805083e84f812e11c4c1c5d92ca54a8646ebf" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.332186 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84f494b65f-swr5f" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.351278 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5wdf\" (UniqueName: \"kubernetes.io/projected/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-kube-api-access-t5wdf\") pod \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.351324 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data-custom\") pod \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.351368 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-logs\") pod \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.351400 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data\") pod \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.351512 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-combined-ca-bundle\") pod \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\" (UID: \"5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.351901 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-logs" (OuterVolumeSpecName: "logs") pod "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" (UID: "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.352370 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80168270-a6db-4ef2-833b-5d2eb2781779-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.352424 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.357428 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.358899 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-kube-api-access-t5wdf" (OuterVolumeSpecName: "kube-api-access-t5wdf") pod "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" (UID: "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba"). InnerVolumeSpecName "kube-api-access-t5wdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.363896 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" (UID: "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.384844 4780 scope.go:117] "RemoveContainer" containerID="138859dce20becf173ad96258d71984b57487f1a412d44d9fd3ffe1deb62aa39" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.406714 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data" (OuterVolumeSpecName: "config-data") pod "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" (UID: "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.414408 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" (UID: "5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.433186 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.448097 4780 scope.go:117] "RemoveContainer" containerID="bcacaffefa0805038ec68a239723691428dbbee367f236e1e7e7b362dd644e5e" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.452981 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf72j\" (UniqueName: \"kubernetes.io/projected/f650c235-dc2c-4737-9624-e2ea4d9ed761-kube-api-access-zf72j\") pod \"f650c235-dc2c-4737-9624-e2ea4d9ed761\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453034 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-sg-core-conf-yaml\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453074 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data\") pod \"f650c235-dc2c-4737-9624-e2ea4d9ed761\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453107 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-combined-ca-bundle\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data-custom\") pod 
\"f650c235-dc2c-4737-9624-e2ea4d9ed761\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453291 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cghrf\" (UniqueName: \"kubernetes.io/projected/a6bee84d-2233-4962-94e0-bfe3c8f26496-kube-api-access-cghrf\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453332 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-scripts\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-run-httpd\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453451 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f650c235-dc2c-4737-9624-e2ea4d9ed761-logs\") pod \"f650c235-dc2c-4737-9624-e2ea4d9ed761\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453512 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-log-httpd\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-combined-ca-bundle\") pod \"f650c235-dc2c-4737-9624-e2ea4d9ed761\" (UID: \"f650c235-dc2c-4737-9624-e2ea4d9ed761\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.453586 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-config-data\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") " Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.454044 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5wdf\" (UniqueName: \"kubernetes.io/projected/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-kube-api-access-t5wdf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.454057 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.454066 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.454075 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:01 crc 
kubenswrapper[4780]: I0219 08:44:01.457168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.457393 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.457905 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f650c235-dc2c-4737-9624-e2ea4d9ed761-logs" (OuterVolumeSpecName: "logs") pod "f650c235-dc2c-4737-9624-e2ea4d9ed761" (UID: "f650c235-dc2c-4737-9624-e2ea4d9ed761"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.460098 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f650c235-dc2c-4737-9624-e2ea4d9ed761-kube-api-access-zf72j" (OuterVolumeSpecName: "kube-api-access-zf72j") pod "f650c235-dc2c-4737-9624-e2ea4d9ed761" (UID: "f650c235-dc2c-4737-9624-e2ea4d9ed761"). InnerVolumeSpecName "kube-api-access-zf72j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.463354 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f650c235-dc2c-4737-9624-e2ea4d9ed761" (UID: "f650c235-dc2c-4737-9624-e2ea4d9ed761"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.463997 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.470114 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zkbjk"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.470585 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-scripts" (OuterVolumeSpecName: "scripts") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.473916 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.481886 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6bee84d-2233-4962-94e0-bfe3c8f26496-kube-api-access-cghrf" (OuterVolumeSpecName: "kube-api-access-cghrf") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "kube-api-access-cghrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.495231 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f650c235-dc2c-4737-9624-e2ea4d9ed761" (UID: "f650c235-dc2c-4737-9624-e2ea4d9ed761"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.495358 4780 scope.go:117] "RemoveContainer" containerID="4162407cdf5682d804be4b4717c823043ddf3ab7e9943293c605618e3930edf7"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.503082 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.519001 4780 scope.go:117] "RemoveContainer" containerID="470d613c3f2933cabeb420246069bef8c1516a00e6cebf54fd8f45fec126403e"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.522757 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data" (OuterVolumeSpecName: "config-data") pod "f650c235-dc2c-4737-9624-e2ea4d9ed761" (UID: "f650c235-dc2c-4737-9624-e2ea4d9ed761"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.530765 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3c22-account-create-update-spstv"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.544758 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.555668 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3c22-account-create-update-spstv"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.587065 4780 scope.go:117] "RemoveContainer" containerID="511ae1a6b95e07069083114a3d15f66169e2683396feb32e7d98594881f3165c"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.587439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590288 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9cb4\" (UniqueName: \"kubernetes.io/projected/9adadc01-71e9-4ef1-a02d-4aa566032209-kube-api-access-v9cb4\") pod \"9adadc01-71e9-4ef1-a02d-4aa566032209\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") "
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590567 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs\") pod \"a6bee84d-2233-4962-94e0-bfe3c8f26496\" (UID: \"a6bee84d-2233-4962-94e0-bfe3c8f26496\") "
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590653 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7w7n\" (UniqueName: \"kubernetes.io/projected/51a5891a-27e3-404a-b8c8-51c2399e8903-kube-api-access-r7w7n\") pod \"51a5891a-27e3-404a-b8c8-51c2399e8903\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") "
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590706 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-combined-ca-bundle\") pod \"51a5891a-27e3-404a-b8c8-51c2399e8903\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") "
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590766 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-config-data\") pod \"51a5891a-27e3-404a-b8c8-51c2399e8903\" (UID: \"51a5891a-27e3-404a-b8c8-51c2399e8903\") "
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.590809 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adadc01-71e9-4ef1-a02d-4aa566032209-operator-scripts\") pod \"9adadc01-71e9-4ef1-a02d-4aa566032209\" (UID: \"9adadc01-71e9-4ef1-a02d-4aa566032209\") "
Feb 19 08:44:01 crc kubenswrapper[4780]: W0219 08:44:01.593870 4780 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a6bee84d-2233-4962-94e0-bfe3c8f26496/volumes/kubernetes.io~secret/ceilometer-tls-certs
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594144 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594442 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkfxk\" (UniqueName: \"kubernetes.io/projected/63fd5f2e-8133-4c0b-b15c-006db1f17fed-kube-api-access-nkfxk\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594486 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cghrf\" (UniqueName: \"kubernetes.io/projected/a6bee84d-2233-4962-94e0-bfe3c8f26496-kube-api-access-cghrf\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594510 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594522 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594534 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f650c235-dc2c-4737-9624-e2ea4d9ed761-logs\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594545 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.594561 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6bee84d-2233-4962-94e0-bfe3c8f26496-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.595248 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.596057 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63fd5f2e-8133-4c0b-b15c-006db1f17fed-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.596097 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf72j\" (UniqueName: \"kubernetes.io/projected/f650c235-dc2c-4737-9624-e2ea4d9ed761-kube-api-access-zf72j\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.596110 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.596511 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.596539 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.596557 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f650c235-dc2c-4737-9624-e2ea4d9ed761-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.595104 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adadc01-71e9-4ef1-a02d-4aa566032209-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9adadc01-71e9-4ef1-a02d-4aa566032209" (UID: "9adadc01-71e9-4ef1-a02d-4aa566032209"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.599522 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adadc01-71e9-4ef1-a02d-4aa566032209-kube-api-access-v9cb4" (OuterVolumeSpecName: "kube-api-access-v9cb4") pod "9adadc01-71e9-4ef1-a02d-4aa566032209" (UID: "9adadc01-71e9-4ef1-a02d-4aa566032209"). InnerVolumeSpecName "kube-api-access-v9cb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.599754 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a5891a-27e3-404a-b8c8-51c2399e8903-kube-api-access-r7w7n" (OuterVolumeSpecName: "kube-api-access-r7w7n") pod "51a5891a-27e3-404a-b8c8-51c2399e8903" (UID: "51a5891a-27e3-404a-b8c8-51c2399e8903"). InnerVolumeSpecName "kube-api-access-r7w7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.628495 4780 scope.go:117] "RemoveContainer" containerID="516c8de3c33c0337fd76fb32a1510070e91e8a75deedbf4866e716ba08c4c8aa"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.631435 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7aae-account-create-update-7t6nb"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.637501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-config-data" (OuterVolumeSpecName: "config-data") pod "a6bee84d-2233-4962-94e0-bfe3c8f26496" (UID: "a6bee84d-2233-4962-94e0-bfe3c8f26496"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.647416 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51a5891a-27e3-404a-b8c8-51c2399e8903" (UID: "51a5891a-27e3-404a-b8c8-51c2399e8903"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.651013 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-config-data" (OuterVolumeSpecName: "config-data") pod "51a5891a-27e3-404a-b8c8-51c2399e8903" (UID: "51a5891a-27e3-404a-b8c8-51c2399e8903"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.660400 4780 scope.go:117] "RemoveContainer" containerID="53eecf6f3abbe44f7e06ac0af7e4deebaf1979eb160ea1159e05a543ac4aea01"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.661616 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7aae-account-create-update-7t6nb"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.694467 4780 scope.go:117] "RemoveContainer" containerID="ba25906e3f30c93cfd93251995fbe6b9adb85d14c0ee7551594e1bd77644bf06"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.709576 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f57f4f6f6-8lqlt"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.713420 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9cb4\" (UniqueName: \"kubernetes.io/projected/9adadc01-71e9-4ef1-a02d-4aa566032209-kube-api-access-v9cb4\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.713450 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7w7n\" (UniqueName: \"kubernetes.io/projected/51a5891a-27e3-404a-b8c8-51c2399e8903-kube-api-access-r7w7n\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.713458 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bee84d-2233-4962-94e0-bfe3c8f26496-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.713467 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.713476 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a5891a-27e3-404a-b8c8-51c2399e8903-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.713483 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adadc01-71e9-4ef1-a02d-4aa566032209-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.715836 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f57f4f6f6-8lqlt"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.722394 4780 scope.go:117] "RemoveContainer" containerID="f4e52f5c3cc7e79bbf52ddff38d3ac4f6046c9da7c1d0d4fb161ff87adbe9315"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.749778 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.751454 4780 scope.go:117] "RemoveContainer" containerID="d0d0ad671ef9d17b1605ad8b7bc48a11301a49a0cc5f0ee6915c47281564ebce"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.769140 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.782843 4780 scope.go:117] "RemoveContainer" containerID="27850fd9ac7fe009c176f0a9206a0fd99a8d233881001985a4dcd3b476a6ee51"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.790526 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f929-account-create-update-7rdjx"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.802535 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f929-account-create-update-7rdjx"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.809571 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: E0219 08:44:01.815946 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 19 08:44:01 crc kubenswrapper[4780]: E0219 08:44:01.816038 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data podName:0bc00934-94b1-4be3-8bf4-845ad08a453f nodeName:}" failed. No retries permitted until 2026-02-19 08:44:09.816017953 +0000 UTC m=+1392.559675402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data") pod "rabbitmq-cell1-server-0" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f") : configmap "rabbitmq-cell1-config-data" not found
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.816247 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.821207 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.825349 4780 scope.go:117] "RemoveContainer" containerID="d2450c95cfe8bb926c8e2c2b6644fad3afd770991b68151d38b6cb8f5a8ae9d1"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.825850 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.832365 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.835304 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.847193 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.858431 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.865640 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78d56d997b-gx5gk"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.877742 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78d56d997b-gx5gk"]
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.952233 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" path="/var/lib/kubelet/pods/041edb21-581b-493e-a2f1-09e0b3559df1/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.953105 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f6487b-6488-44c0-b2de-2a7f8955a46a" path="/var/lib/kubelet/pods/06f6487b-6488-44c0-b2de-2a7f8955a46a/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.953615 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" path="/var/lib/kubelet/pods/0a69047c-4c8d-4b93-82b3-005a9e83f686/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.955077 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8f6013-5488-4922-94d6-167007269739" path="/var/lib/kubelet/pods/1b8f6013-5488-4922-94d6-167007269739/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.955528 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" path="/var/lib/kubelet/pods/4ef67457-e347-4ea9-b488-32b52af9146c/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.955971 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63fd5f2e-8133-4c0b-b15c-006db1f17fed" path="/var/lib/kubelet/pods/63fd5f2e-8133-4c0b-b15c-006db1f17fed/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.956295 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef81227-694a-4bad-b32b-809d351ec668" path="/var/lib/kubelet/pods/7ef81227-694a-4bad-b32b-809d351ec668/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.957382 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" path="/var/lib/kubelet/pods/80168270-a6db-4ef2-833b-5d2eb2781779/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.960023 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4" path="/var/lib/kubelet/pods/93c4ea3e-9190-44e9-8cd1-fa2ecce7e5d4/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.961345 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" path="/var/lib/kubelet/pods/9b47d55e-fb13-4f2f-8708-a68119e39b60/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.962320 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27398f8-93a8-47a9-a517-b161dad9cc11" path="/var/lib/kubelet/pods/a27398f8-93a8-47a9-a517-b161dad9cc11/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.962801 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd7c548-a04c-4556-bcae-618ae65658de" path="/var/lib/kubelet/pods/acd7c548-a04c-4556-bcae-618ae65658de/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.963258 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c180e0b2-79c3-49b7-bac3-f868aeebd2cc" path="/var/lib/kubelet/pods/c180e0b2-79c3-49b7-bac3-f868aeebd2cc/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.963637 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" path="/var/lib/kubelet/pods/ee75a5b9-0f5b-4db0-ab84-e4848bf382a7/volumes"
Feb 19 08:44:01 crc kubenswrapper[4780]: I0219 08:44:01.964656 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" path="/var/lib/kubelet/pods/fa951d8d-6e05-4995-9a80-fb0808216e61/volumes"
Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.127807 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7 is running failed: container process not found" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.128356 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7 is running failed: container process not found" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.128656 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7 is running failed: container process not found" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.128728 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="ovn-northd"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.179981 4780 generic.go:334] "Generic (PLEG): container finished" podID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerID="7c909b0dbce18b4a1334fd4ddf863413080b8c52e4f0a329f074299164d924ec" exitCode=0
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.180104 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d","Type":"ContainerDied","Data":"7c909b0dbce18b4a1334fd4ddf863413080b8c52e4f0a329f074299164d924ec"}
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.189473 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zkbjk" event={"ID":"9adadc01-71e9-4ef1-a02d-4aa566032209","Type":"ContainerDied","Data":"c03c5a5cb4239089e06d7d4f0271245a9122ff0018dbc2776a18d063409c1fcc"}
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.189746 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zkbjk"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.199998 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k" event={"ID":"f650c235-dc2c-4737-9624-e2ea4d9ed761","Type":"ContainerDied","Data":"ceb54fb44992352c5007b4258dc19a3b58445c08efc7604ca15a130bccc89c5b"}
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.200055 4780 scope.go:117] "RemoveContainer" containerID="7edbc265ca1fca9fec89c4aa613f291e13e679212a1411e4d048f4165e32dd71"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.200202 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57d747cdfb-5j92k"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.217384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51a5891a-27e3-404a-b8c8-51c2399e8903","Type":"ContainerDied","Data":"5cf2d77de3da8e1e323bc1e83691ae27f0a79aa622128fa5ce05302023b8d99f"}
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.217638 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.224871 4780 generic.go:334] "Generic (PLEG): container finished" podID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerID="a465db40f9eca8dcae409a58d79d3d9cd987c42bad7e6a4443d618b97692b1e5" exitCode=0
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.224973 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84f494b65f-swr5f"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.224987 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73cb84ca-f3ee-4c97-8c4d-0a1564822827","Type":"ContainerDied","Data":"a465db40f9eca8dcae409a58d79d3d9cd987c42bad7e6a4443d618b97692b1e5"}
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.225095 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.252218 4780 scope.go:117] "RemoveContainer" containerID="cf73772a4d01bf87fe0b1f3121d3412df9da363dc17c7d7d04a7882814ffc9ad"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.291485 4780 scope.go:117] "RemoveContainer" containerID="db7558dcf5fe6aacc17c64bdcb258573074ef34bdf43a2c427a18ce8405b1869"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.383246 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57d747cdfb-5j92k"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.393090 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.396148 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-57d747cdfb-5j92k"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.403319 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84f494b65f-swr5f"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.413235 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-84f494b65f-swr5f"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.430079 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zkbjk"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431749 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-erlang-cookie\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431800 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-kube-api-access-k2qvg\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-server-conf\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431855 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-plugins-conf\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431880 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-tls\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431934 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-confd\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431966 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-erlang-cookie-secret\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.431983 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-plugins\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.432047 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-pod-info\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.432160 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\" (UID: \"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d\") "
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.434557 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.434563 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zkbjk"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.435411 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.435509 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.436374 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.437657 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.438903 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.439483 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-kube-api-access-k2qvg" (OuterVolumeSpecName: "kube-api-access-k2qvg") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "kube-api-access-k2qvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.439993 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.443292 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.444569 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "local-storage04-crc".
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.452736 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.457553 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.457642 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-pod-info" (OuterVolumeSpecName: "pod-info") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.496400 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-server-conf" (OuterVolumeSpecName: "server-conf") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.517097 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data" (OuterVolumeSpecName: "config-data") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.532650 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-default\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.532818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-galera-tls-certs\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.532912 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533056 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-operator-scripts\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533200 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-combined-ca-bundle\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533340 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dt2z\" (UniqueName: 
\"kubernetes.io/projected/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kube-api-access-2dt2z\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kolla-config\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533522 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-generated\") pod \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\" (UID: \"73cb84ca-f3ee-4c97-8c4d-0a1564822827\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533842 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533926 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.533997 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534072 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc 
kubenswrapper[4780]: I0219 08:44:02.534171 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534253 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534363 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-kube-api-access-k2qvg\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534440 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534515 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534599 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.534464 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.535450 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.535818 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.536062 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.537912 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kube-api-access-2dt2z" (OuterVolumeSpecName: "kube-api-access-2dt2z") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "kube-api-access-2dt2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.540022 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" (UID: "b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.545609 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.554277 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.555748 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.597218 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c517061-49de-445a-955e-006cbf09b6fd/ovn-northd/0.log" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.597284 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.601739 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "73cb84ca-f3ee-4c97-8c4d-0a1564822827" (UID: "73cb84ca-f3ee-4c97-8c4d-0a1564822827"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.635416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c517061-49de-445a-955e-006cbf09b6fd-ovn-rundir\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.635559 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msv56\" (UniqueName: \"kubernetes.io/projected/2c517061-49de-445a-955e-006cbf09b6fd-kube-api-access-msv56\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.635628 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-metrics-certs-tls-certs\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.635645 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-ovn-northd-tls-certs\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 
08:44:02.635672 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-scripts\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.635705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-combined-ca-bundle\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.635724 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-config\") pod \"2c517061-49de-445a-955e-006cbf09b6fd\" (UID: \"2c517061-49de-445a-955e-006cbf09b6fd\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636022 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dt2z\" (UniqueName: \"kubernetes.io/projected/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kube-api-access-2dt2z\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636038 4780 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636047 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636056 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636066 4780 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636085 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636103 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636111 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cb84ca-f3ee-4c97-8c4d-0a1564822827-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636133 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73cb84ca-f3ee-4c97-8c4d-0a1564822827-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636142 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636526 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-scripts" (OuterVolumeSpecName: "scripts") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: 
"2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.636758 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c517061-49de-445a-955e-006cbf09b6fd-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: "2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.638389 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-config" (OuterVolumeSpecName: "config") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: "2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.639580 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c517061-49de-445a-955e-006cbf09b6fd-kube-api-access-msv56" (OuterVolumeSpecName: "kube-api-access-msv56") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: "2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "kube-api-access-msv56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.653070 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.671407 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: "2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.737475 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.737930 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.737984 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.738001 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c517061-49de-445a-955e-006cbf09b6fd-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.738021 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 
08:44:02.738033 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c517061-49de-445a-955e-006cbf09b6fd-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.738048 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msv56\" (UniqueName: \"kubernetes.io/projected/2c517061-49de-445a-955e-006cbf09b6fd-kube-api-access-msv56\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.750004 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: "2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.754249 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2c517061-49de-445a-955e-006cbf09b6fd" (UID: "2c517061-49de-445a-955e-006cbf09b6fd"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.838799 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc00934-94b1-4be3-8bf4-845ad08a453f-erlang-cookie-secret\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.838837 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc00934-94b1-4be3-8bf4-845ad08a453f-pod-info\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.838871 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-erlang-cookie\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.838918 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-confd\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.838968 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-plugins\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839019 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839098 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839144 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-tls\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839213 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-plugins-conf\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839237 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-server-conf\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839260 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z4zr\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-kube-api-access-5z4zr\") pod \"0bc00934-94b1-4be3-8bf4-845ad08a453f\" (UID: \"0bc00934-94b1-4be3-8bf4-845ad08a453f\") " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 
08:44:02.839655 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.839674 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c517061-49de-445a-955e-006cbf09b6fd-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.840623 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.843191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.843753 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.844257 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-kube-api-access-5z4zr" (OuterVolumeSpecName: "kube-api-access-5z4zr") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "kube-api-access-5z4zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.845359 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc00934-94b1-4be3-8bf4-845ad08a453f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.846389 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.847961 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0bc00934-94b1-4be3-8bf4-845ad08a453f-pod-info" (OuterVolumeSpecName: "pod-info") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.852434 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.889747 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data" (OuterVolumeSpecName: "config-data") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.901698 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-server-conf" (OuterVolumeSpecName: "server-conf") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.940421 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.940685 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.941287 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.941392 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.941486 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc00934-94b1-4be3-8bf4-845ad08a453f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.941583 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z4zr\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-kube-api-access-5z4zr\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.941690 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc00934-94b1-4be3-8bf4-845ad08a453f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.941863 4780 
reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc00934-94b1-4be3-8bf4-845ad08a453f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.942008 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.942107 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.957949 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.972794 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.972881 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.975538 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.975656 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.978282 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.978355 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.978290 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:02 crc kubenswrapper[4780]: E0219 08:44:02.978762 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:44:02 crc kubenswrapper[4780]: I0219 08:44:02.982378 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0bc00934-94b1-4be3-8bf4-845ad08a453f" (UID: "0bc00934-94b1-4be3-8bf4-845ad08a453f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.027998 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.043838 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-internal-tls-certs\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.043900 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-scripts\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.043943 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-fernet-keys\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.044018 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-public-tls-certs\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.044087 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-credential-keys\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.044175 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-config-data\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.044229 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-combined-ca-bundle\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.044253 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7snm4\" (UniqueName: \"kubernetes.io/projected/e3467470-e6f9-49c1-b49f-8cea159e5af9-kube-api-access-7snm4\") pod \"e3467470-e6f9-49c1-b49f-8cea159e5af9\" (UID: \"e3467470-e6f9-49c1-b49f-8cea159e5af9\") " Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 
08:44:03.046570 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc00934-94b1-4be3-8bf4-845ad08a453f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.046649 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.051931 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.055299 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-scripts" (OuterVolumeSpecName: "scripts") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.055322 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.062338 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3467470-e6f9-49c1-b49f-8cea159e5af9-kube-api-access-7snm4" (OuterVolumeSpecName: "kube-api-access-7snm4") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "kube-api-access-7snm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.075296 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.084329 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-config-data" (OuterVolumeSpecName: "config-data") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.102300 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.111160 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e3467470-e6f9-49c1-b49f-8cea159e5af9" (UID: "e3467470-e6f9-49c1-b49f-8cea159e5af9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148284 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148319 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148329 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148338 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148347 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148355 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148363 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3467470-e6f9-49c1-b49f-8cea159e5af9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.148372 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7snm4\" (UniqueName: \"kubernetes.io/projected/e3467470-e6f9-49c1-b49f-8cea159e5af9-kube-api-access-7snm4\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.235356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73cb84ca-f3ee-4c97-8c4d-0a1564822827","Type":"ContainerDied","Data":"60b46abe092b86d449446c89e2b4b1991a09ce34bdf022cdb388f39941292f30"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.235435 4780 scope.go:117] "RemoveContainer" containerID="a465db40f9eca8dcae409a58d79d3d9cd987c42bad7e6a4443d618b97692b1e5" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.235670 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.240728 4780 generic.go:334] "Generic (PLEG): container finished" podID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerID="f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156" exitCode=0 Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.240787 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc00934-94b1-4be3-8bf4-845ad08a453f","Type":"ContainerDied","Data":"f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.240813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc00934-94b1-4be3-8bf4-845ad08a453f","Type":"ContainerDied","Data":"b281f9a655f05156eb2a34b396913745af0b56a30d288667820688b4bf1e2ced"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.240871 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.246881 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c517061-49de-445a-955e-006cbf09b6fd/ovn-northd/0.log" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.247062 4780 generic.go:334] "Generic (PLEG): container finished" podID="2c517061-49de-445a-955e-006cbf09b6fd" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" exitCode=139 Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.247151 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c517061-49de-445a-955e-006cbf09b6fd","Type":"ContainerDied","Data":"658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.247162 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.247180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c517061-49de-445a-955e-006cbf09b6fd","Type":"ContainerDied","Data":"7b285ca0f694d9ce1daaa28a431008337d22f2a74c7a26a06abd3f5453e288fc"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.249718 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d","Type":"ContainerDied","Data":"1943e82197f795ca64a71f6b980217f179bb2b845d760c1f6da5d6226929a448"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.249806 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.262062 4780 generic.go:334] "Generic (PLEG): container finished" podID="e3467470-e6f9-49c1-b49f-8cea159e5af9" containerID="bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918" exitCode=0 Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.262214 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-68c564b849-pqj6g" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.262113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68c564b849-pqj6g" event={"ID":"e3467470-e6f9-49c1-b49f-8cea159e5af9","Type":"ContainerDied","Data":"bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.262379 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68c564b849-pqj6g" event={"ID":"e3467470-e6f9-49c1-b49f-8cea159e5af9","Type":"ContainerDied","Data":"d7eae6eb2d4fecffd3005969e0bdbe1acb23278d700df6ee9402a3c32e8ea861"} Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.272261 4780 scope.go:117] "RemoveContainer" containerID="514c00fff12df406f7165a76b74f40b00b1ac7918ea0cd73c453c1c82402e66f" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.327383 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.335921 4780 scope.go:117] "RemoveContainer" containerID="f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.344240 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.362498 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.380393 4780 scope.go:117] "RemoveContainer" containerID="92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.381312 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.395916 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 
19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.400054 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.407018 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.412214 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.416598 4780 scope.go:117] "RemoveContainer" containerID="f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.417556 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156\": container with ID starting with f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156 not found: ID does not exist" containerID="f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.417588 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156"} err="failed to get container status \"f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156\": rpc error: code = NotFound desc = could not find container \"f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156\": container with ID starting with f71ab6fe841df745310bea9571be2597f2ba857a731ad991cef0b5e663d51156 not found: ID does not exist" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.417609 4780 scope.go:117] "RemoveContainer" containerID="92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.417901 4780 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11\": container with ID starting with 92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11 not found: ID does not exist" containerID="92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.417948 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11"} err="failed to get container status \"92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11\": rpc error: code = NotFound desc = could not find container \"92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11\": container with ID starting with 92138aa55ff99cca8c657478fee5e8e3d29dabe5fe564038ea4692de0381fd11 not found: ID does not exist" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.417980 4780 scope.go:117] "RemoveContainer" containerID="bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.423308 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-68c564b849-pqj6g"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.428205 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-68c564b849-pqj6g"] Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.440618 4780 scope.go:117] "RemoveContainer" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.458472 4780 scope.go:117] "RemoveContainer" containerID="bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.459660 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980\": container with ID starting with bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980 not found: ID does not exist" containerID="bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.459706 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980"} err="failed to get container status \"bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980\": rpc error: code = NotFound desc = could not find container \"bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980\": container with ID starting with bd0faed3323446b708de4e9f98c953bf7896291cf6ad9c8958906c34e2163980 not found: ID does not exist" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.459741 4780 scope.go:117] "RemoveContainer" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.460747 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7\": container with ID starting with 658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7 not found: ID does not exist" containerID="658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.460881 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7"} err="failed to get container status \"658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7\": rpc error: code = NotFound desc = could not find container \"658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7\": container with ID 
starting with 658c0600d84b00c8d85a39ed86fd0080c4cee63bc7fe518116dd67a00304b9d7 not found: ID does not exist" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.461014 4780 scope.go:117] "RemoveContainer" containerID="7c909b0dbce18b4a1334fd4ddf863413080b8c52e4f0a329f074299164d924ec" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.481157 4780 scope.go:117] "RemoveContainer" containerID="72b514fc5a5844ba34d80cc5567e9e8a5b704063a3ee0c1d2f21802a766c98c3" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.502081 4780 scope.go:117] "RemoveContainer" containerID="bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.525597 4780 scope.go:117] "RemoveContainer" containerID="bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.528768 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918\": container with ID starting with bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918 not found: ID does not exist" containerID="bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.528811 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918"} err="failed to get container status \"bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918\": rpc error: code = NotFound desc = could not find container \"bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918\": container with ID starting with bbf3139e58791b1ff4b0d65514abe2eef10c3ba14010818896776871a00e7918 not found: ID does not exist" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.595939 4780 handlers.go:78] "Exec lifecycle hook for 
Container in Pod failed" err=< Feb 19 08:44:03 crc kubenswrapper[4780]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-19T08:43:56Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 08:44:03 crc kubenswrapper[4780]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Feb 19 08:44:03 crc kubenswrapper[4780]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-nj9cs" message=< Feb 19 08:44:03 crc kubenswrapper[4780]: Exiting ovn-controller (1) [FAILED] Feb 19 08:44:03 crc kubenswrapper[4780]: Killing ovn-controller (1) [ OK ] Feb 19 08:44:03 crc kubenswrapper[4780]: Killing ovn-controller (1) with SIGKILL [ OK ] Feb 19 08:44:03 crc kubenswrapper[4780]: 2026-02-19T08:43:56Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 08:44:03 crc kubenswrapper[4780]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Feb 19 08:44:03 crc kubenswrapper[4780]: > Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.596381 4780 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 19 08:44:03 crc kubenswrapper[4780]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-19T08:43:56Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 08:44:03 crc kubenswrapper[4780]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Feb 19 08:44:03 crc kubenswrapper[4780]: > pod="openstack/ovn-controller-nj9cs" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" containerID="cri-o://01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.596418 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-nj9cs" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" 
containerID="cri-o://01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" gracePeriod=22 Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.596899 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nj9cs" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" probeResult="failure" output="" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.597690 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3 is running failed: container process not found" containerID="01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.598048 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3 is running failed: container process not found" containerID="01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.600309 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3 is running failed: container process not found" containerID="01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.600355 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-nj9cs" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.950411 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" path="/var/lib/kubelet/pods/0bc00934-94b1-4be3-8bf4-845ad08a453f/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.951235 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c517061-49de-445a-955e-006cbf09b6fd" path="/var/lib/kubelet/pods/2c517061-49de-445a-955e-006cbf09b6fd/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.951816 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51a5891a-27e3-404a-b8c8-51c2399e8903" path="/var/lib/kubelet/pods/51a5891a-27e3-404a-b8c8-51c2399e8903/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.952830 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" path="/var/lib/kubelet/pods/5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.953682 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" path="/var/lib/kubelet/pods/73cb84ca-f3ee-4c97-8c4d-0a1564822827/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.954792 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adadc01-71e9-4ef1-a02d-4aa566032209" path="/var/lib/kubelet/pods/9adadc01-71e9-4ef1-a02d-4aa566032209/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.955149 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" path="/var/lib/kubelet/pods/a6bee84d-2233-4962-94e0-bfe3c8f26496/volumes" 
Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.956267 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" path="/var/lib/kubelet/pods/b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.957517 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3467470-e6f9-49c1-b49f-8cea159e5af9" path="/var/lib/kubelet/pods/e3467470-e6f9-49c1-b49f-8cea159e5af9/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: I0219 08:44:03.958320 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" path="/var/lib/kubelet/pods/f650c235-dc2c-4737-9624-e2ea4d9ed761/volumes" Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.991575 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d is running failed: container process not found" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.991900 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d is running failed: container process not found" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.992231 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d is running failed: container process not 
found" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 08:44:03 crc kubenswrapper[4780]: E0219 08:44:03.992290 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerName="nova-cell1-conductor-conductor" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.095816 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-565f58cc6f-vwtvf" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.095860 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-565f58cc6f-vwtvf" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.194739 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.284145 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nj9cs_d1721266-ba6d-49a4-b30d-049d4f4e1978/ovn-controller/0.log" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.284457 4780 generic.go:334] "Generic (PLEG): container finished" podID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerID="01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" exitCode=137 Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.284528 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nj9cs" event={"ID":"d1721266-ba6d-49a4-b30d-049d4f4e1978","Type":"ContainerDied","Data":"01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3"} Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.285755 4780 generic.go:334] "Generic (PLEG): container finished" podID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" exitCode=0 Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.285792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d44b6c27-15b7-4e04-ac73-742091b1b33d","Type":"ContainerDied","Data":"a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d"} Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.285808 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d44b6c27-15b7-4e04-ac73-742091b1b33d","Type":"ContainerDied","Data":"a520e993f2eb26379dc054ce24ff36a6d00c7dca79a3fe0ba2dd46a863586153"} Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.285824 4780 scope.go:117] "RemoveContainer" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.285923 4780 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.287294 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzwpv\" (UniqueName: \"kubernetes.io/projected/d44b6c27-15b7-4e04-ac73-742091b1b33d-kube-api-access-hzwpv\") pod \"d44b6c27-15b7-4e04-ac73-742091b1b33d\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.287864 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-combined-ca-bundle\") pod \"d44b6c27-15b7-4e04-ac73-742091b1b33d\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.287907 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-config-data\") pod \"d44b6c27-15b7-4e04-ac73-742091b1b33d\" (UID: \"d44b6c27-15b7-4e04-ac73-742091b1b33d\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.310628 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44b6c27-15b7-4e04-ac73-742091b1b33d-kube-api-access-hzwpv" (OuterVolumeSpecName: "kube-api-access-hzwpv") pod "d44b6c27-15b7-4e04-ac73-742091b1b33d" (UID: "d44b6c27-15b7-4e04-ac73-742091b1b33d"). InnerVolumeSpecName "kube-api-access-hzwpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.314586 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-config-data" (OuterVolumeSpecName: "config-data") pod "d44b6c27-15b7-4e04-ac73-742091b1b33d" (UID: "d44b6c27-15b7-4e04-ac73-742091b1b33d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.317643 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d44b6c27-15b7-4e04-ac73-742091b1b33d" (UID: "d44b6c27-15b7-4e04-ac73-742091b1b33d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.339730 4780 scope.go:117] "RemoveContainer" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" Feb 19 08:44:04 crc kubenswrapper[4780]: E0219 08:44:04.340186 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d\": container with ID starting with a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d not found: ID does not exist" containerID="a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.340215 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d"} err="failed to get container status \"a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d\": rpc error: code = NotFound desc = could not find container \"a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d\": container with ID starting with a8aa837400c30c0e233fd723b08658410dc85d6cbb3af1d012cbb7e3f520aa0d not found: ID does not exist" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.351328 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nj9cs_d1721266-ba6d-49a4-b30d-049d4f4e1978/ovn-controller/0.log" Feb 19 08:44:04 crc kubenswrapper[4780]: 
I0219 08:44:04.351378 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nj9cs" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391496 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-log-ovn\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1721266-ba6d-49a4-b30d-049d4f4e1978-scripts\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391639 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391676 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-combined-ca-bundle\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391761 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tpcl\" (UniqueName: \"kubernetes.io/projected/d1721266-ba6d-49a4-b30d-049d4f4e1978-kube-api-access-2tpcl\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391839 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run-ovn\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.391886 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-ovn-controller-tls-certs\") pod \"d1721266-ba6d-49a4-b30d-049d4f4e1978\" (UID: \"d1721266-ba6d-49a4-b30d-049d4f4e1978\") " Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.392267 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzwpv\" (UniqueName: \"kubernetes.io/projected/d44b6c27-15b7-4e04-ac73-742091b1b33d-kube-api-access-hzwpv\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.392281 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.392310 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44b6c27-15b7-4e04-ac73-742091b1b33d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.395211 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run" (OuterVolumeSpecName: "var-run") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.395270 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.395909 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.402230 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1721266-ba6d-49a4-b30d-049d4f4e1978-kube-api-access-2tpcl" (OuterVolumeSpecName: "kube-api-access-2tpcl") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "kube-api-access-2tpcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.402544 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1721266-ba6d-49a4-b30d-049d4f4e1978-scripts" (OuterVolumeSpecName: "scripts") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.415509 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.436248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "d1721266-ba6d-49a4-b30d-049d4f4e1978" (UID: "d1721266-ba6d-49a4-b30d-049d4f4e1978"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.493689 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1721266-ba6d-49a4-b30d-049d4f4e1978-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.494257 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.494318 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.494371 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tpcl\" (UniqueName: \"kubernetes.io/projected/d1721266-ba6d-49a4-b30d-049d4f4e1978-kube-api-access-2tpcl\") 
on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.494424 4780 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.494474 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1721266-ba6d-49a4-b30d-049d4f4e1978-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.494532 4780 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1721266-ba6d-49a4-b30d-049d4f4e1978-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.611942 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 08:44:04 crc kubenswrapper[4780]: I0219 08:44:04.617003 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.345871 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nj9cs_d1721266-ba6d-49a4-b30d-049d4f4e1978/ovn-controller/0.log" Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.346059 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nj9cs" Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.346090 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nj9cs" event={"ID":"d1721266-ba6d-49a4-b30d-049d4f4e1978","Type":"ContainerDied","Data":"a53d68d89a6d0cd9e41606129b505dc54db5c9378f37093dae27e02e7f78a906"} Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.346149 4780 scope.go:117] "RemoveContainer" containerID="01e526754a8186cc197f578e6e4b762cc364d55b8be1f7128a12dfaeab666cd3" Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.410003 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nj9cs"] Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.414496 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nj9cs"] Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.949733 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" path="/var/lib/kubelet/pods/d1721266-ba6d-49a4-b30d-049d4f4e1978/volumes" Feb 19 08:44:05 crc kubenswrapper[4780]: I0219 08:44:05.951361 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" path="/var/lib/kubelet/pods/d44b6c27-15b7-4e04-ac73-742091b1b33d/volumes" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.396886 4780 generic.go:334] "Generic (PLEG): container finished" podID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerID="e6ad5dd9860e6a4ae8010d509505f12ab7f5487560b9bde69360e238499f4fd4" exitCode=0 Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.396975 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f45bb7d89-m7r5b" event={"ID":"8a16f10c-8261-47f0-949b-abe6aaf7a408","Type":"ContainerDied","Data":"e6ad5dd9860e6a4ae8010d509505f12ab7f5487560b9bde69360e238499f4fd4"} Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.569580 4780 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625020 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-public-tls-certs\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625086 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-ovndb-tls-certs\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625169 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-combined-ca-bundle\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625223 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-httpd-config\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625264 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-internal-tls-certs\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625286 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qxztj\" (UniqueName: \"kubernetes.io/projected/8a16f10c-8261-47f0-949b-abe6aaf7a408-kube-api-access-qxztj\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.625311 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-config\") pod \"8a16f10c-8261-47f0-949b-abe6aaf7a408\" (UID: \"8a16f10c-8261-47f0-949b-abe6aaf7a408\") " Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.629596 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a16f10c-8261-47f0-949b-abe6aaf7a408-kube-api-access-qxztj" (OuterVolumeSpecName: "kube-api-access-qxztj") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "kube-api-access-qxztj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.630335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.662855 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.663553 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-config" (OuterVolumeSpecName: "config") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.670770 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.682241 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.684304 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a16f10c-8261-47f0-949b-abe6aaf7a408" (UID: "8a16f10c-8261-47f0-949b-abe6aaf7a408"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726445 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726480 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726492 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxztj\" (UniqueName: \"kubernetes.io/projected/8a16f10c-8261-47f0-949b-abe6aaf7a408-kube-api-access-qxztj\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726503 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726511 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726519 4780 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:06 crc kubenswrapper[4780]: I0219 08:44:06.726527 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a16f10c-8261-47f0-949b-abe6aaf7a408-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.410784 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f45bb7d89-m7r5b" event={"ID":"8a16f10c-8261-47f0-949b-abe6aaf7a408","Type":"ContainerDied","Data":"64fed94c4c48749e5472f14f55be888e45dee9103ea0cef768200af82647f22a"} Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.411051 4780 scope.go:117] "RemoveContainer" containerID="0d36fd403a1d035939e04b1b25e02143c36dd932d5f72c7108d1f6415319ef45" Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.411157 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f45bb7d89-m7r5b" Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.445433 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f45bb7d89-m7r5b"] Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.450436 4780 scope.go:117] "RemoveContainer" containerID="e6ad5dd9860e6a4ae8010d509505f12ab7f5487560b9bde69360e238499f4fd4" Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.451196 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f45bb7d89-m7r5b"] Feb 19 08:44:07 crc kubenswrapper[4780]: I0219 08:44:07.952764 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" path="/var/lib/kubelet/pods/8a16f10c-8261-47f0-949b-abe6aaf7a408/volumes" Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.969857 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.970602 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.970994 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.971035 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.974952 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.976529 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.977962 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:07 crc kubenswrapper[4780]: E0219 08:44:07.978024 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.968700 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.969968 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.970087 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.970379 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.970418 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.971614 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.972966 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:12 crc kubenswrapper[4780]: E0219 08:44:12.973016 4780 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.969557 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.970279 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.970626 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.970681 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" 
containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.970722 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.971878 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.972912 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:17 crc kubenswrapper[4780]: E0219 08:44:17.972966 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.968895 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.969803 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.970464 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.970499 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.970802 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.974701 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.976609 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 08:44:22 crc kubenswrapper[4780]: E0219 08:44:22.976693 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-s8k96" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.526194 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.593177 4780 generic.go:334] "Generic (PLEG): container finished" podID="81f6be70-b99e-42e2-ada9-535daa67785c" containerID="2188a10120d8bd63dfe375b654626303aec9847b21eac95df015ea5bd7642279" exitCode=137 Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.593237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"2188a10120d8bd63dfe375b654626303aec9847b21eac95df015ea5bd7642279"} Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.595576 4780 generic.go:334] "Generic (PLEG): container finished" podID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerID="7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e" exitCode=137 Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.595619 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98f20ebd-43c0-4332-988a-f487d7704bc1","Type":"ContainerDied","Data":"7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e"} Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.595646 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98f20ebd-43c0-4332-988a-f487d7704bc1","Type":"ContainerDied","Data":"129373fb6224c5c1717d2e3db01c6e03e85b15185ba660814ed94ef67b5055fa"} Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.595666 4780 scope.go:117] "RemoveContainer" containerID="ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.595676 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.619050 4780 scope.go:117] "RemoveContainer" containerID="7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98f20ebd-43c0-4332-988a-f487d7704bc1-etc-machine-id\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622408 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98f20ebd-43c0-4332-988a-f487d7704bc1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622443 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data-custom\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622487 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-combined-ca-bundle\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622505 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622535 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-scripts\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.622599 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5jzf\" (UniqueName: \"kubernetes.io/projected/98f20ebd-43c0-4332-988a-f487d7704bc1-kube-api-access-x5jzf\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.623737 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98f20ebd-43c0-4332-988a-f487d7704bc1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.629897 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.629945 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-scripts" (OuterVolumeSpecName: "scripts") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.631437 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f20ebd-43c0-4332-988a-f487d7704bc1-kube-api-access-x5jzf" (OuterVolumeSpecName: "kube-api-access-x5jzf") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "kube-api-access-x5jzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.641952 4780 scope.go:117] "RemoveContainer" containerID="ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5" Feb 19 08:44:25 crc kubenswrapper[4780]: E0219 08:44:25.642528 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5\": container with ID starting with ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5 not found: ID does not exist" containerID="ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.642578 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5"} err="failed to get container status \"ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5\": rpc error: code = NotFound desc = could not find container \"ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5\": container with ID starting with ba3f63846f3064917dab79f9e20689c990ae6093ca7c68e5c8baa72829e5c8a5 not found: ID does not exist" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.642627 4780 scope.go:117] "RemoveContainer" containerID="7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e" Feb 19 08:44:25 crc kubenswrapper[4780]: E0219 08:44:25.643164 4780 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e\": container with ID starting with 7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e not found: ID does not exist" containerID="7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.643205 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e"} err="failed to get container status \"7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e\": rpc error: code = NotFound desc = could not find container \"7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e\": container with ID starting with 7eb59e6b04249031a3c25e4d694f8c947d826ff5ad93844f5728cc21c5d03f9e not found: ID does not exist" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.672056 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.681060 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.734462 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data" (OuterVolumeSpecName: "config-data") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.734866 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data\") pod \"98f20ebd-43c0-4332-988a-f487d7704bc1\" (UID: \"98f20ebd-43c0-4332-988a-f487d7704bc1\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.734989 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f6be70-b99e-42e2-ada9-535daa67785c-combined-ca-bundle\") pod \"81f6be70-b99e-42e2-ada9-535daa67785c\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735047 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-cache\") pod \"81f6be70-b99e-42e2-ada9-535daa67785c\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " Feb 19 08:44:25 crc kubenswrapper[4780]: W0219 08:44:25.735064 4780 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/98f20ebd-43c0-4332-988a-f487d7704bc1/volumes/kubernetes.io~secret/config-data Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735087 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"81f6be70-b99e-42e2-ada9-535daa67785c\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735143 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-lock\") pod \"81f6be70-b99e-42e2-ada9-535daa67785c\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " Feb 19 08:44:25 
crc kubenswrapper[4780]: I0219 08:44:25.735201 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") pod \"81f6be70-b99e-42e2-ada9-535daa67785c\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwpq\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-kube-api-access-sdwpq\") pod \"81f6be70-b99e-42e2-ada9-535daa67785c\" (UID: \"81f6be70-b99e-42e2-ada9-535daa67785c\") " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735733 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735764 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5jzf\" (UniqueName: \"kubernetes.io/projected/98f20ebd-43c0-4332-988a-f487d7704bc1-kube-api-access-x5jzf\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735782 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735880 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735087 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data" 
(OuterVolumeSpecName: "config-data") pod "98f20ebd-43c0-4332-988a-f487d7704bc1" (UID: "98f20ebd-43c0-4332-988a-f487d7704bc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.735998 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-cache" (OuterVolumeSpecName: "cache") pod "81f6be70-b99e-42e2-ada9-535daa67785c" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.738608 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-kube-api-access-sdwpq" (OuterVolumeSpecName: "kube-api-access-sdwpq") pod "81f6be70-b99e-42e2-ada9-535daa67785c" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c"). InnerVolumeSpecName "kube-api-access-sdwpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.739022 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-lock" (OuterVolumeSpecName: "lock") pod "81f6be70-b99e-42e2-ada9-535daa67785c" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.741051 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "81f6be70-b99e-42e2-ada9-535daa67785c" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.742757 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "81f6be70-b99e-42e2-ada9-535daa67785c" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.837665 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f20ebd-43c0-4332-988a-f487d7704bc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.837690 4780 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-cache\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.837710 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.837720 4780 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81f6be70-b99e-42e2-ada9-535daa67785c-lock\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.837729 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.837740 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwpq\" (UniqueName: 
\"kubernetes.io/projected/81f6be70-b99e-42e2-ada9-535daa67785c-kube-api-access-sdwpq\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.851680 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.933894 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.941146 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:25 crc kubenswrapper[4780]: I0219 08:44:25.951098 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.050979 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f6be70-b99e-42e2-ada9-535daa67785c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81f6be70-b99e-42e2-ada9-535daa67785c" (UID: "81f6be70-b99e-42e2-ada9-535daa67785c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.145266 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f6be70-b99e-42e2-ada9-535daa67785c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.158074 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s8k96_6d459ce0-3049-4b3a-a076-682771965fc2/ovs-vswitchd/0.log" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.158989 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcwm\" (UniqueName: \"kubernetes.io/projected/6d459ce0-3049-4b3a-a076-682771965fc2-kube-api-access-jmcwm\") pod \"6d459ce0-3049-4b3a-a076-682771965fc2\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246494 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-log\") pod \"6d459ce0-3049-4b3a-a076-682771965fc2\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246526 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-etc-ovs\") pod \"6d459ce0-3049-4b3a-a076-682771965fc2\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246588 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-lib\") pod \"6d459ce0-3049-4b3a-a076-682771965fc2\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246623 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d459ce0-3049-4b3a-a076-682771965fc2-scripts\") pod \"6d459ce0-3049-4b3a-a076-682771965fc2\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246592 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-log" (OuterVolumeSpecName: "var-log") pod "6d459ce0-3049-4b3a-a076-682771965fc2" (UID: "6d459ce0-3049-4b3a-a076-682771965fc2"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246647 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-run\") pod \"6d459ce0-3049-4b3a-a076-682771965fc2\" (UID: \"6d459ce0-3049-4b3a-a076-682771965fc2\") " Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246688 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-run" (OuterVolumeSpecName: "var-run") pod "6d459ce0-3049-4b3a-a076-682771965fc2" (UID: "6d459ce0-3049-4b3a-a076-682771965fc2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246675 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "6d459ce0-3049-4b3a-a076-682771965fc2" (UID: "6d459ce0-3049-4b3a-a076-682771965fc2"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.246727 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-lib" (OuterVolumeSpecName: "var-lib") pod "6d459ce0-3049-4b3a-a076-682771965fc2" (UID: "6d459ce0-3049-4b3a-a076-682771965fc2"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.247530 4780 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.247552 4780 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.247564 4780 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-lib\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.247572 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d459ce0-3049-4b3a-a076-682771965fc2-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.247790 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d459ce0-3049-4b3a-a076-682771965fc2-scripts" (OuterVolumeSpecName: "scripts") pod "6d459ce0-3049-4b3a-a076-682771965fc2" (UID: "6d459ce0-3049-4b3a-a076-682771965fc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.249529 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d459ce0-3049-4b3a-a076-682771965fc2-kube-api-access-jmcwm" (OuterVolumeSpecName: "kube-api-access-jmcwm") pod "6d459ce0-3049-4b3a-a076-682771965fc2" (UID: "6d459ce0-3049-4b3a-a076-682771965fc2"). InnerVolumeSpecName "kube-api-access-jmcwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.349101 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcwm\" (UniqueName: \"kubernetes.io/projected/6d459ce0-3049-4b3a-a076-682771965fc2-kube-api-access-jmcwm\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.349171 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d459ce0-3049-4b3a-a076-682771965fc2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.609694 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81f6be70-b99e-42e2-ada9-535daa67785c","Type":"ContainerDied","Data":"e693c06a94c81a6b75dd845d4b0b434065c56c5699f43e57240307a4ebbfc295"} Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.610032 4780 scope.go:117] "RemoveContainer" containerID="2188a10120d8bd63dfe375b654626303aec9847b21eac95df015ea5bd7642279" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.610218 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.613555 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s8k96_6d459ce0-3049-4b3a-a076-682771965fc2/ovs-vswitchd/0.log" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.614577 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d459ce0-3049-4b3a-a076-682771965fc2" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" exitCode=137 Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.614617 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-s8k96" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.614656 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerDied","Data":"b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9"} Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.614704 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s8k96" event={"ID":"6d459ce0-3049-4b3a-a076-682771965fc2","Type":"ContainerDied","Data":"0a83fe931441676f24f3e4f0dc0927aa9482b5f65344667293aa1db23099a60f"} Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.631609 4780 scope.go:117] "RemoveContainer" containerID="356b8ecfe832de6a4352add852082b58adb0d47998d76bb8be15bf9b809ca6ba" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.656913 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.666191 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.666424 4780 scope.go:117] "RemoveContainer" containerID="c564e5d19b14dd5427df8bd9f7c31fa51688096980104c53a0592b494393444e" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.670877 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-s8k96"] Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.674906 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-s8k96"] Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.683268 4780 scope.go:117] "RemoveContainer" containerID="7308fc7b05b12c3aded56d1b465656996edb4a1aaac742755f2267f8856a1738" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.701912 4780 scope.go:117] "RemoveContainer" 
containerID="931137663cf98f50b611ab7a350ff29c75096b512f3bd8479a3da859ff9249dd" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.722891 4780 scope.go:117] "RemoveContainer" containerID="e346987639804be13b9078ad7625d170b9ddd9507142084a4d071af736f1e9e5" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.750977 4780 scope.go:117] "RemoveContainer" containerID="cd76a02c1c0dd51248f9e5d74c516ac9964b548ece3e6feef086a11ee77f79b3" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.801241 4780 scope.go:117] "RemoveContainer" containerID="1ebc68436865188cf2fad212eae5f1403c1f801aa9e70f545ee26cbab63c85e3" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.817966 4780 scope.go:117] "RemoveContainer" containerID="a84d96cb9630fe50228a686a5d919a25906bd58fdea9bb24ab6c2a2aa7322132" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.833755 4780 scope.go:117] "RemoveContainer" containerID="877371d495b56baac057d9902da78b62d27253e5f8d0b6e010c755c8c3c39f70" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.849991 4780 scope.go:117] "RemoveContainer" containerID="b664265cd69a21ec8233ee532fb4e98fecc2e465f25adc32ed09379a81449626" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.868848 4780 scope.go:117] "RemoveContainer" containerID="fca6d50f8c31e787a22a0309a33a9f858675c6427c2b2a393662ddffb55cb5cd" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.889805 4780 scope.go:117] "RemoveContainer" containerID="cfdce07de9a5a0bf8e587fc69e8297c91b8913ec02aceb794adf35e345ef0d13" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.913171 4780 scope.go:117] "RemoveContainer" containerID="4f00dfcd9db180a87181a3c3d01eba45c08d63f7da35c02e1f8eb76e78aba164" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.941867 4780 scope.go:117] "RemoveContainer" containerID="cc54bc275542f23253910477331cac8c186c8fe35ad45eef8b4392d021ab1bd6" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.962001 4780 scope.go:117] "RemoveContainer" 
containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" Feb 19 08:44:26 crc kubenswrapper[4780]: I0219 08:44:26.982920 4780 scope.go:117] "RemoveContainer" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.004290 4780 scope.go:117] "RemoveContainer" containerID="21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.037693 4780 scope.go:117] "RemoveContainer" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" Feb 19 08:44:27 crc kubenswrapper[4780]: E0219 08:44:27.038214 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9\": container with ID starting with b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9 not found: ID does not exist" containerID="b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.038448 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9"} err="failed to get container status \"b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9\": rpc error: code = NotFound desc = could not find container \"b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9\": container with ID starting with b4d61fc0ad4ad2cd6ea3fc9271e96ed27251b78da91a62bb63cc7c353ee52da9 not found: ID does not exist" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.038495 4780 scope.go:117] "RemoveContainer" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" Feb 19 08:44:27 crc kubenswrapper[4780]: E0219 08:44:27.039006 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb\": container with ID starting with 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb not found: ID does not exist" containerID="253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.039053 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb"} err="failed to get container status \"253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb\": rpc error: code = NotFound desc = could not find container \"253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb\": container with ID starting with 253753c921fa12570df4682679d2ee7577b01ae1f1b9e11c4696485f1a2ff9eb not found: ID does not exist" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.039082 4780 scope.go:117] "RemoveContainer" containerID="21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7" Feb 19 08:44:27 crc kubenswrapper[4780]: E0219 08:44:27.039953 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7\": container with ID starting with 21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7 not found: ID does not exist" containerID="21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.040069 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7"} err="failed to get container status \"21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7\": rpc error: code = NotFound desc = could not find container 
\"21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7\": container with ID starting with 21bcf812ee6341045e6b74aaf53f190657232cde0c9f4a979b2a7259b22495b7 not found: ID does not exist" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.954462 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" path="/var/lib/kubelet/pods/6d459ce0-3049-4b3a-a076-682771965fc2/volumes" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.955850 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" path="/var/lib/kubelet/pods/81f6be70-b99e-42e2-ada9-535daa67785c/volumes" Feb 19 08:44:27 crc kubenswrapper[4780]: I0219 08:44:27.960022 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" path="/var/lib/kubelet/pods/98f20ebd-43c0-4332-988a-f487d7704bc1/volumes" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.147735 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb"] Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148508 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148523 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148538 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="cinder-scheduler" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148544 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="cinder-scheduler" Feb 19 08:45:00 crc kubenswrapper[4780]: 
E0219 08:45:00.148554 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148560 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148566 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3467470-e6f9-49c1-b49f-8cea159e5af9" containerName="keystone-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148572 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3467470-e6f9-49c1-b49f-8cea159e5af9" containerName="keystone-api" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148583 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148590 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148599 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148605 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-api" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148617 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerName="rabbitmq" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148622 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerName="rabbitmq" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148632 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148638 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148644 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="probe" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148650 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="probe" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148658 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="proxy-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148664 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="proxy-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148672 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148677 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148684 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148690 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148701 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148707 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148717 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerName="setup-container" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148723 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerName="setup-container" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148734 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148739 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148748 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-reaper" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148753 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-reaper" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148765 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-central-agent" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148771 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-central-agent" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148777 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerName="setup-container" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148784 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerName="setup-container" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148790 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148796 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148804 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerName="rabbitmq" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148810 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerName="rabbitmq" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148819 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="sg-core" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148824 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="sg-core" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148834 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148840 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148848 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" 
containerName="placement-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148853 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-api" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148865 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server-init" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148871 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server-init" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148878 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148884 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148892 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148898 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-server" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148908 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148914 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148920 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148926 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-server" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148935 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="rsync" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148941 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="rsync" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148948 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27398f8-93a8-47a9-a517-b161dad9cc11" containerName="kube-state-metrics" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148955 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27398f8-93a8-47a9-a517-b161dad9cc11" containerName="kube-state-metrics" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148963 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148969 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148974 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148980 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.148986 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerName="nova-cell1-conductor-conductor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.148991 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerName="nova-cell1-conductor-conductor" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149000 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-updater" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149005 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-updater" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149014 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd7c548-a04c-4556-bcae-618ae65658de" containerName="memcached" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149019 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd7c548-a04c-4556-bcae-618ae65658de" containerName="memcached" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149027 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149033 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149042 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149048 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-server" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149058 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="51a5891a-27e3-404a-b8c8-51c2399e8903" containerName="nova-cell0-conductor-conductor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149063 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="51a5891a-27e3-404a-b8c8-51c2399e8903" containerName="nova-cell0-conductor-conductor" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149074 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149080 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149090 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-updater" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149096 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-updater" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149109 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="openstack-network-exporter" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149115 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="openstack-network-exporter" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149147 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149153 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149161 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149166 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149176 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149185 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-server" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149195 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149202 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149212 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerName="mysql-bootstrap" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149219 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerName="mysql-bootstrap" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149229 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="ovn-northd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149236 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="ovn-northd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149244 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149251 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149262 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149269 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149278 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerName="galera" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149284 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerName="galera" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149294 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149494 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149503 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149511 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-api" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149520 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-metadata" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149527 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-metadata" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149537 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-expirer" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149544 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-expirer" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149553 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149560 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149572 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-notification-agent" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149580 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-notification-agent" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149593 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149600 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149610 4780 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149618 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-log" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149632 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="swift-recon-cron" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149639 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="swift-recon-cron" Feb 19 08:45:00 crc kubenswrapper[4780]: E0219 08:45:00.149649 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149656 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149793 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-notification-agent" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149807 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149819 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovsdb-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149830 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee75a5b9-0f5b-4db0-ab84-e4848bf382a7" containerName="nova-metadata-metadata" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149839 4780 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149846 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149858 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149866 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="cinder-scheduler" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149890 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d459ce0-3049-4b3a-a076-682771965fc2" containerName="ovs-vswitchd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149900 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149910 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b814fc4c-5e70-4b85-84b0-dcfc4cd4c16d" containerName="rabbitmq" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149920 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="73cb84ca-f3ee-4c97-8c4d-0a1564822827" containerName="galera" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149929 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="ovn-northd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149937 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149948 4780 
memory_manager.go:354] "RemoveStaleState removing state" podUID="98f20ebd-43c0-4332-988a-f487d7704bc1" containerName="probe" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149958 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44b6c27-15b7-4e04-ac73-742091b1b33d" containerName="nova-cell1-conductor-conductor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149969 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149979 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149988 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-updater" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.149998 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150009 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-updater" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150016 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-expirer" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150027 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150057 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener" Feb 19 08:45:00 
crc kubenswrapper[4780]: I0219 08:45:00.150066 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150075 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="proxy-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150087 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-reaper" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150096 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-server" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150104 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b47d55e-fb13-4f2f-8708-a68119e39b60" containerName="nova-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150115 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3467470-e6f9-49c1-b49f-8cea159e5af9" containerName="keystone-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150142 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef67457-e347-4ea9-b488-32b52af9146c" containerName="barbican-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150150 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="041edb21-581b-493e-a2f1-09e0b3559df1" containerName="cinder-api-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150159 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="rsync" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150171 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-server" Feb 19 
08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150178 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="ceilometer-central-agent" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150191 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150198 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150208 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="swift-recon-cron" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150217 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef81227-694a-4bad-b32b-809d351ec668" containerName="proxy-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150225 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd7c548-a04c-4556-bcae-618ae65658de" containerName="memcached" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150236 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27398f8-93a8-47a9-a517-b161dad9cc11" containerName="kube-state-metrics" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150247 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="object-replicator" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150254 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a16f10c-8261-47f0-949b-abe6aaf7a408" containerName="neutron-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150262 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" 
containerName="glance-httpd" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150275 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="container-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150286 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc00934-94b1-4be3-8bf4-845ad08a453f" containerName="rabbitmq" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150296 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bee84d-2233-4962-94e0-bfe3c8f26496" containerName="sg-core" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150305 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1721266-ba6d-49a4-b30d-049d4f4e1978" containerName="ovn-controller" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150315 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa951d8d-6e05-4995-9a80-fb0808216e61" containerName="glance-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150327 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="80168270-a6db-4ef2-833b-5d2eb2781779" containerName="placement-api" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150338 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c517061-49de-445a-955e-006cbf09b6fd" containerName="openstack-network-exporter" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150349 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a69047c-4c8d-4b93-82b3-005a9e83f686" containerName="glance-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150359 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eae0b6e-a27a-47aa-8dc1-cf743b5f8aba" containerName="barbican-worker-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150373 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="81f6be70-b99e-42e2-ada9-535daa67785c" containerName="account-auditor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150386 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f650c235-dc2c-4737-9624-e2ea4d9ed761" containerName="barbican-keystone-listener-log" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150394 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="51a5891a-27e3-404a-b8c8-51c2399e8903" containerName="nova-cell0-conductor-conductor" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.150905 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.153607 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.153693 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.167486 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb"] Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.182889 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscb4\" (UniqueName: \"kubernetes.io/projected/6125ea06-2501-442e-b5b1-d44d92f9e162-kube-api-access-vscb4\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.182944 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6125ea06-2501-442e-b5b1-d44d92f9e162-config-volume\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.183042 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6125ea06-2501-442e-b5b1-d44d92f9e162-secret-volume\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.284796 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6125ea06-2501-442e-b5b1-d44d92f9e162-secret-volume\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.284908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscb4\" (UniqueName: \"kubernetes.io/projected/6125ea06-2501-442e-b5b1-d44d92f9e162-kube-api-access-vscb4\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.284961 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6125ea06-2501-442e-b5b1-d44d92f9e162-config-volume\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: 
I0219 08:45:00.286512 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6125ea06-2501-442e-b5b1-d44d92f9e162-config-volume\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.290994 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6125ea06-2501-442e-b5b1-d44d92f9e162-secret-volume\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.302619 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscb4\" (UniqueName: \"kubernetes.io/projected/6125ea06-2501-442e-b5b1-d44d92f9e162-kube-api-access-vscb4\") pod \"collect-profiles-29524845-592cb\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.483862 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.929318 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb"] Feb 19 08:45:00 crc kubenswrapper[4780]: I0219 08:45:00.954316 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" event={"ID":"6125ea06-2501-442e-b5b1-d44d92f9e162","Type":"ContainerStarted","Data":"2cbb9214e165c0a80ba578926e8f3df3993b6e007c3265c7e4f62599e0ed5405"} Feb 19 08:45:01 crc kubenswrapper[4780]: I0219 08:45:01.986626 4780 generic.go:334] "Generic (PLEG): container finished" podID="6125ea06-2501-442e-b5b1-d44d92f9e162" containerID="f8f7e3f045950df5a293e367cc6a8fcc792e550728934fc773472a1eaa07e01e" exitCode=0 Feb 19 08:45:01 crc kubenswrapper[4780]: I0219 08:45:01.986732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" event={"ID":"6125ea06-2501-442e-b5b1-d44d92f9e162","Type":"ContainerDied","Data":"f8f7e3f045950df5a293e367cc6a8fcc792e550728934fc773472a1eaa07e01e"} Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.359852 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.435407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vscb4\" (UniqueName: \"kubernetes.io/projected/6125ea06-2501-442e-b5b1-d44d92f9e162-kube-api-access-vscb4\") pod \"6125ea06-2501-442e-b5b1-d44d92f9e162\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.435516 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6125ea06-2501-442e-b5b1-d44d92f9e162-secret-volume\") pod \"6125ea06-2501-442e-b5b1-d44d92f9e162\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.435575 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6125ea06-2501-442e-b5b1-d44d92f9e162-config-volume\") pod \"6125ea06-2501-442e-b5b1-d44d92f9e162\" (UID: \"6125ea06-2501-442e-b5b1-d44d92f9e162\") " Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.436407 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6125ea06-2501-442e-b5b1-d44d92f9e162-config-volume" (OuterVolumeSpecName: "config-volume") pod "6125ea06-2501-442e-b5b1-d44d92f9e162" (UID: "6125ea06-2501-442e-b5b1-d44d92f9e162"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.441810 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6125ea06-2501-442e-b5b1-d44d92f9e162-kube-api-access-vscb4" (OuterVolumeSpecName: "kube-api-access-vscb4") pod "6125ea06-2501-442e-b5b1-d44d92f9e162" (UID: "6125ea06-2501-442e-b5b1-d44d92f9e162"). 
InnerVolumeSpecName "kube-api-access-vscb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.442301 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6125ea06-2501-442e-b5b1-d44d92f9e162-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6125ea06-2501-442e-b5b1-d44d92f9e162" (UID: "6125ea06-2501-442e-b5b1-d44d92f9e162"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.537079 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vscb4\" (UniqueName: \"kubernetes.io/projected/6125ea06-2501-442e-b5b1-d44d92f9e162-kube-api-access-vscb4\") on node \"crc\" DevicePath \"\"" Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.537117 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6125ea06-2501-442e-b5b1-d44d92f9e162-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:45:03 crc kubenswrapper[4780]: I0219 08:45:03.537163 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6125ea06-2501-442e-b5b1-d44d92f9e162-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:45:04 crc kubenswrapper[4780]: I0219 08:45:04.010024 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" event={"ID":"6125ea06-2501-442e-b5b1-d44d92f9e162","Type":"ContainerDied","Data":"2cbb9214e165c0a80ba578926e8f3df3993b6e007c3265c7e4f62599e0ed5405"} Feb 19 08:45:04 crc kubenswrapper[4780]: I0219 08:45:04.010087 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cbb9214e165c0a80ba578926e8f3df3993b6e007c3265c7e4f62599e0ed5405" Feb 19 08:45:04 crc kubenswrapper[4780]: I0219 08:45:04.010374 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb" Feb 19 08:45:36 crc kubenswrapper[4780]: I0219 08:45:36.336191 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:45:36 crc kubenswrapper[4780]: I0219 08:45:36.336784 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.317973 4780 scope.go:117] "RemoveContainer" containerID="553f577ac51e9372da372faf811a0c3edeee048a97a94209d758afdc77103541" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.348687 4780 scope.go:117] "RemoveContainer" containerID="6393f12f3cb4c5e2df5ae49ff25ea66f3e1da538d8a43eb9dcee422b85672c7c" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.387678 4780 scope.go:117] "RemoveContainer" containerID="babcffd09ac04f4a740adc86c8d0876c0829bc381dfdb0ee7a062a05f60aa1f7" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.421497 4780 scope.go:117] "RemoveContainer" containerID="13d9e63a67f4cabf81e3b618c9388b5c9c94f117d10fd2292810d3542a2eb1dd" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.442897 4780 scope.go:117] "RemoveContainer" containerID="68d8d2a238df0c56bfedd746eb03803f3b0221f54d574b5db9d92b68a77195e0" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.474641 4780 scope.go:117] "RemoveContainer" containerID="f49589050a71a1e8ab881d86d68db619d9b1db7fd6611fa5d97f0caa42d5b8d7" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 
08:46:01.500313 4780 scope.go:117] "RemoveContainer" containerID="43f3807845d6b9d2b1024f020aa420e73a2feee64ccc631c39ac44b44af591b4" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.522826 4780 scope.go:117] "RemoveContainer" containerID="cc27237022f809cd63a9926b55592579736e25ba63040266efa1bf7931831723" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.550294 4780 scope.go:117] "RemoveContainer" containerID="f8d6415b61380e5d7e78f85a4160c7b86ef8975d68a5fbf9fbda2814a02de3b0" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.583160 4780 scope.go:117] "RemoveContainer" containerID="ce68a2104a99b076a82e6dfc356074c88a457b853c24a0d53b84bead36b5e461" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.627636 4780 scope.go:117] "RemoveContainer" containerID="b350240a4378abb9db72f535c5c98f2291baf23455875cdea354e0f4ed27661f" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.677106 4780 scope.go:117] "RemoveContainer" containerID="eff7393185561c27413a75db83884015c05cba991a750ef914d6173d4cfdb168" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.697732 4780 scope.go:117] "RemoveContainer" containerID="6ca9e8ee84cdf435111e299c530163a4bd65ba3935f02d328d063f1fd872d17a" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.719464 4780 scope.go:117] "RemoveContainer" containerID="71c4933d930bf50c88e918b9407c4855c895d0148329d0083c50ac79c8bebef9" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.747509 4780 scope.go:117] "RemoveContainer" containerID="b507eaf6aa48bfdaa9db8ae94ab3426bff9bdb3667fb4f653a3340a6b2340be5" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.838118 4780 scope.go:117] "RemoveContainer" containerID="1e831d18d35ae8f44d6b8a238c4cab50690385fc043885c5021b32e17ebeb213" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.886483 4780 scope.go:117] "RemoveContainer" containerID="6780cd8bdf4cd22bd3d12173f36b7b9af8652b38cd77695bf228aa1707a08ee0" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.924404 4780 
scope.go:117] "RemoveContainer" containerID="5c1a8193962c4508a41af80a87835fe20e9486a20d8cc9ca42f8fb94ff2a53a8" Feb 19 08:46:01 crc kubenswrapper[4780]: I0219 08:46:01.960389 4780 scope.go:117] "RemoveContainer" containerID="915f81b7075274d01be1a9a53b49c578e0ee7d68fa4534042cdbf87c6173e436" Feb 19 08:46:02 crc kubenswrapper[4780]: I0219 08:46:02.004714 4780 scope.go:117] "RemoveContainer" containerID="8a4822323cbe0de91a7339cdab1edaa575463e00b274ef17baf200c5215124e9" Feb 19 08:46:06 crc kubenswrapper[4780]: I0219 08:46:06.340201 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:46:06 crc kubenswrapper[4780]: I0219 08:46:06.340754 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.336070 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.337052 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:46:36 crc 
kubenswrapper[4780]: I0219 08:46:36.337159 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.338185 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.338328 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" gracePeriod=600 Feb 19 08:46:36 crc kubenswrapper[4780]: E0219 08:46:36.473268 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.938688 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" exitCode=0 Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.938747 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75"} Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.938843 4780 scope.go:117] "RemoveContainer" containerID="faebb4e2dff7f5e3e2970ac268d8a29ca21fbe03139102c6901b8c69fd561a84" Feb 19 08:46:36 crc kubenswrapper[4780]: I0219 08:46:36.939682 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:46:36 crc kubenswrapper[4780]: E0219 08:46:36.940097 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:46:48 crc kubenswrapper[4780]: I0219 08:46:48.938501 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:46:48 crc kubenswrapper[4780]: E0219 08:46:48.939257 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:47:00 crc kubenswrapper[4780]: I0219 08:47:00.938944 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:47:00 crc kubenswrapper[4780]: E0219 08:47:00.940580 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:47:02 crc kubenswrapper[4780]: I0219 08:47:02.397490 4780 scope.go:117] "RemoveContainer" containerID="73d5cbd13217b63c486f4e7d0145f416329fcfa8071559edf5d00c330057800d" Feb 19 08:47:02 crc kubenswrapper[4780]: I0219 08:47:02.460454 4780 scope.go:117] "RemoveContainer" containerID="c3921f149371697efdafbcc4de7e9b64c08a711186d72940c711346e6a977776" Feb 19 08:47:02 crc kubenswrapper[4780]: I0219 08:47:02.507354 4780 scope.go:117] "RemoveContainer" containerID="6cc78ab8f7b9e9df271b1241208a5165a0e1b133172de580b0941a07a1cbbb55" Feb 19 08:47:02 crc kubenswrapper[4780]: I0219 08:47:02.528912 4780 scope.go:117] "RemoveContainer" containerID="c1e464177020365ce834cf4619a79c16908295623f830da258cffc9a74c3a004" Feb 19 08:47:02 crc kubenswrapper[4780]: I0219 08:47:02.582082 4780 scope.go:117] "RemoveContainer" containerID="33372ea022a8bd6de99a3d6f15e51d7ba430019ef7b27207983d49036151c801" Feb 19 08:47:02 crc kubenswrapper[4780]: I0219 08:47:02.608578 4780 scope.go:117] "RemoveContainer" containerID="dbbf16158178fa041adbf5533a0fdbe89cd97e4da8fe0830082cd1e4d2ba6f56" Feb 19 08:47:13 crc kubenswrapper[4780]: I0219 08:47:13.938386 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:47:13 crc kubenswrapper[4780]: E0219 08:47:13.939313 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:47:24 crc kubenswrapper[4780]: I0219 08:47:24.938056 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:47:24 crc kubenswrapper[4780]: E0219 08:47:24.939421 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.543518 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cdxnb"] Feb 19 08:47:25 crc kubenswrapper[4780]: E0219 08:47:25.544728 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6125ea06-2501-442e-b5b1-d44d92f9e162" containerName="collect-profiles" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.544761 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6125ea06-2501-442e-b5b1-d44d92f9e162" containerName="collect-profiles" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.545058 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6125ea06-2501-442e-b5b1-d44d92f9e162" containerName="collect-profiles" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.547287 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.561434 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdxnb"] Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.663617 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dx6c\" (UniqueName: \"kubernetes.io/projected/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-kube-api-access-2dx6c\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.663950 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-catalog-content\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.664145 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-utilities\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.765287 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-catalog-content\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.765361 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-utilities\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.765416 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dx6c\" (UniqueName: \"kubernetes.io/projected/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-kube-api-access-2dx6c\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.766187 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-catalog-content\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.766286 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-utilities\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.789769 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dx6c\" (UniqueName: \"kubernetes.io/projected/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-kube-api-access-2dx6c\") pod \"redhat-marketplace-cdxnb\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:25 crc kubenswrapper[4780]: I0219 08:47:25.892417 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:26 crc kubenswrapper[4780]: I0219 08:47:26.351472 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdxnb"] Feb 19 08:47:26 crc kubenswrapper[4780]: I0219 08:47:26.430393 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerStarted","Data":"f0492100384dc3fc42a3d0276a9d87224e7b3751fc26e34ef2616f2125c3c7b8"} Feb 19 08:47:27 crc kubenswrapper[4780]: I0219 08:47:27.445866 4780 generic.go:334] "Generic (PLEG): container finished" podID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerID="0f31aa306b5b2e15e0ee1f12864a6bc32d82e9338925a9fa43bb83011bbe7831" exitCode=0 Feb 19 08:47:27 crc kubenswrapper[4780]: I0219 08:47:27.445932 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerDied","Data":"0f31aa306b5b2e15e0ee1f12864a6bc32d82e9338925a9fa43bb83011bbe7831"} Feb 19 08:47:27 crc kubenswrapper[4780]: I0219 08:47:27.448811 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:47:28 crc kubenswrapper[4780]: I0219 08:47:28.463354 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerStarted","Data":"c4b37e38b183b2c8a60630d6d86e1f6082206899bd6b9ed858900beca45ddfff"} Feb 19 08:47:29 crc kubenswrapper[4780]: I0219 08:47:29.479220 4780 generic.go:334] "Generic (PLEG): container finished" podID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerID="c4b37e38b183b2c8a60630d6d86e1f6082206899bd6b9ed858900beca45ddfff" exitCode=0 Feb 19 08:47:29 crc kubenswrapper[4780]: I0219 08:47:29.479328 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerDied","Data":"c4b37e38b183b2c8a60630d6d86e1f6082206899bd6b9ed858900beca45ddfff"} Feb 19 08:47:30 crc kubenswrapper[4780]: I0219 08:47:30.492250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerStarted","Data":"abc7fef2fb6fc4bc2928b4fd08f7315ffb5cb387d5ef9febbecf7e82d883ba00"} Feb 19 08:47:30 crc kubenswrapper[4780]: I0219 08:47:30.528280 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cdxnb" podStartSLOduration=3.049883457 podStartE2EDuration="5.528241631s" podCreationTimestamp="2026-02-19 08:47:25 +0000 UTC" firstStartedPulling="2026-02-19 08:47:27.448381928 +0000 UTC m=+1590.192039387" lastFinishedPulling="2026-02-19 08:47:29.926740092 +0000 UTC m=+1592.670397561" observedRunningTime="2026-02-19 08:47:30.521052243 +0000 UTC m=+1593.264709772" watchObservedRunningTime="2026-02-19 08:47:30.528241631 +0000 UTC m=+1593.271899120" Feb 19 08:47:35 crc kubenswrapper[4780]: I0219 08:47:35.893017 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:35 crc kubenswrapper[4780]: I0219 08:47:35.893600 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:35 crc kubenswrapper[4780]: I0219 08:47:35.949539 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:36 crc kubenswrapper[4780]: I0219 08:47:36.594845 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:36 crc kubenswrapper[4780]: I0219 08:47:36.649028 4780 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdxnb"] Feb 19 08:47:38 crc kubenswrapper[4780]: I0219 08:47:38.566094 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cdxnb" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="registry-server" containerID="cri-o://abc7fef2fb6fc4bc2928b4fd08f7315ffb5cb387d5ef9febbecf7e82d883ba00" gracePeriod=2 Feb 19 08:47:38 crc kubenswrapper[4780]: I0219 08:47:38.938715 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:47:38 crc kubenswrapper[4780]: E0219 08:47:38.939152 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.582184 4780 generic.go:334] "Generic (PLEG): container finished" podID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerID="abc7fef2fb6fc4bc2928b4fd08f7315ffb5cb387d5ef9febbecf7e82d883ba00" exitCode=0 Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.582243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerDied","Data":"abc7fef2fb6fc4bc2928b4fd08f7315ffb5cb387d5ef9febbecf7e82d883ba00"} Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.835287 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.901353 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dx6c\" (UniqueName: \"kubernetes.io/projected/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-kube-api-access-2dx6c\") pod \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.901466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-utilities\") pod \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.901533 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-catalog-content\") pod \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\" (UID: \"dfa8e9dd-c85e-4427-b709-d9bc524eb74a\") " Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.902582 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-utilities" (OuterVolumeSpecName: "utilities") pod "dfa8e9dd-c85e-4427-b709-d9bc524eb74a" (UID: "dfa8e9dd-c85e-4427-b709-d9bc524eb74a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.905922 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.909994 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-kube-api-access-2dx6c" (OuterVolumeSpecName: "kube-api-access-2dx6c") pod "dfa8e9dd-c85e-4427-b709-d9bc524eb74a" (UID: "dfa8e9dd-c85e-4427-b709-d9bc524eb74a"). InnerVolumeSpecName "kube-api-access-2dx6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:47:39 crc kubenswrapper[4780]: I0219 08:47:39.930444 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfa8e9dd-c85e-4427-b709-d9bc524eb74a" (UID: "dfa8e9dd-c85e-4427-b709-d9bc524eb74a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.006962 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dx6c\" (UniqueName: \"kubernetes.io/projected/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-kube-api-access-2dx6c\") on node \"crc\" DevicePath \"\"" Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.006991 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfa8e9dd-c85e-4427-b709-d9bc524eb74a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.597284 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdxnb" event={"ID":"dfa8e9dd-c85e-4427-b709-d9bc524eb74a","Type":"ContainerDied","Data":"f0492100384dc3fc42a3d0276a9d87224e7b3751fc26e34ef2616f2125c3c7b8"} Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.597396 4780 scope.go:117] "RemoveContainer" containerID="abc7fef2fb6fc4bc2928b4fd08f7315ffb5cb387d5ef9febbecf7e82d883ba00" Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.597331 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdxnb" Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.628375 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdxnb"] Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.634420 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdxnb"] Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.634605 4780 scope.go:117] "RemoveContainer" containerID="c4b37e38b183b2c8a60630d6d86e1f6082206899bd6b9ed858900beca45ddfff" Feb 19 08:47:40 crc kubenswrapper[4780]: I0219 08:47:40.659615 4780 scope.go:117] "RemoveContainer" containerID="0f31aa306b5b2e15e0ee1f12864a6bc32d82e9338925a9fa43bb83011bbe7831" Feb 19 08:47:41 crc kubenswrapper[4780]: I0219 08:47:41.958638 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" path="/var/lib/kubelet/pods/dfa8e9dd-c85e-4427-b709-d9bc524eb74a/volumes" Feb 19 08:47:53 crc kubenswrapper[4780]: I0219 08:47:53.938744 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:47:53 crc kubenswrapper[4780]: E0219 08:47:53.939762 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.756336 4780 scope.go:117] "RemoveContainer" containerID="a0bdeaf3e6cabab71f82d834e0e050541dbbc381af78e6fe80d504a23266934a" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.784748 4780 scope.go:117] "RemoveContainer" 
containerID="c8136cd50f1a775f65bf2095dc11b1215969e1fb9f8b041ab429d2039ee66443" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.826063 4780 scope.go:117] "RemoveContainer" containerID="8e2d51373786153cd7127ee770c74e56db622810f20d0c53345fc6ffbf410603" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.863963 4780 scope.go:117] "RemoveContainer" containerID="3ff7cf1b87b3928c92dd3bcafc98d39fa7f7f44d629b68dc9b1be48e0e5a72f3" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.910252 4780 scope.go:117] "RemoveContainer" containerID="6bc806239e81869760052d5f53e7b69eac6afe49ce83ac519cb286970ec688dd" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.945026 4780 scope.go:117] "RemoveContainer" containerID="65bcd04f511759d61996cfe8497192b0cf4af2b30ccf58e868bc58f2de65f5fa" Feb 19 08:48:02 crc kubenswrapper[4780]: I0219 08:48:02.969097 4780 scope.go:117] "RemoveContainer" containerID="d30162c6653792d87df8e73905c42561bdf82bb30f675f92b1c1bedff12d66f4" Feb 19 08:48:05 crc kubenswrapper[4780]: I0219 08:48:05.939086 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:48:05 crc kubenswrapper[4780]: E0219 08:48:05.939940 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:48:16 crc kubenswrapper[4780]: I0219 08:48:16.938032 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:48:16 crc kubenswrapper[4780]: E0219 08:48:16.938908 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:48:31 crc kubenswrapper[4780]: I0219 08:48:31.938644 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:48:31 crc kubenswrapper[4780]: E0219 08:48:31.939733 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:48:46 crc kubenswrapper[4780]: I0219 08:48:46.938970 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:48:46 crc kubenswrapper[4780]: E0219 08:48:46.940519 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:49:00 crc kubenswrapper[4780]: I0219 08:49:00.938314 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:49:00 crc kubenswrapper[4780]: E0219 08:49:00.939583 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.117514 4780 scope.go:117] "RemoveContainer" containerID="c0da2919c8a8269894ab28300296cfe09a550a15aab73746a5abbe2f79a6020e" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.184760 4780 scope.go:117] "RemoveContainer" containerID="b1f8f92c605a74c8e4de483c71a576834ddf5781b144587755d1b657923d5477" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.212603 4780 scope.go:117] "RemoveContainer" containerID="b3f39502442fe07eed7a1a803c209b72c96771a7fcc5a2b9991e435b889f53cf" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.237346 4780 scope.go:117] "RemoveContainer" containerID="1cfc9bf4b6c77d959a0f79e0b6127d18398787d6583e7ce82131bd062a4da946" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.260016 4780 scope.go:117] "RemoveContainer" containerID="3d8bad2c317a4a0b36b6a1817e4a990bff2ebf795779d2abfaa149778ea4cf26" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.305810 4780 scope.go:117] "RemoveContainer" containerID="c1e91a87f73224be9b1e1c661e6be1cc05ace2a3bc8a0c6cc1bf125f0b7a0238" Feb 19 08:49:03 crc kubenswrapper[4780]: I0219 08:49:03.361934 4780 scope.go:117] "RemoveContainer" containerID="11546e606f0ed19c34b297d01479e536a89c87d04b0b835ed462a9e04f3f7c79" Feb 19 08:49:11 crc kubenswrapper[4780]: I0219 08:49:11.939429 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:49:11 crc kubenswrapper[4780]: E0219 08:49:11.940816 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.486573 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q4ffv"] Feb 19 08:49:25 crc kubenswrapper[4780]: E0219 08:49:25.490533 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="extract-utilities" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.490601 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="extract-utilities" Feb 19 08:49:25 crc kubenswrapper[4780]: E0219 08:49:25.490637 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="extract-content" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.490656 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="extract-content" Feb 19 08:49:25 crc kubenswrapper[4780]: E0219 08:49:25.490697 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="registry-server" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.490716 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="registry-server" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.491203 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa8e9dd-c85e-4427-b709-d9bc524eb74a" containerName="registry-server" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.493217 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.496522 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4ffv"] Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.520955 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2nk\" (UniqueName: \"kubernetes.io/projected/1556bc14-cdb3-49e0-80d0-f8fc37bace81-kube-api-access-jn2nk\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.521070 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-utilities\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.521118 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-catalog-content\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.623057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-utilities\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.623171 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-catalog-content\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.623216 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn2nk\" (UniqueName: \"kubernetes.io/projected/1556bc14-cdb3-49e0-80d0-f8fc37bace81-kube-api-access-jn2nk\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.623750 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-utilities\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.623907 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-catalog-content\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.653657 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn2nk\" (UniqueName: \"kubernetes.io/projected/1556bc14-cdb3-49e0-80d0-f8fc37bace81-kube-api-access-jn2nk\") pod \"redhat-operators-q4ffv\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:25 crc kubenswrapper[4780]: I0219 08:49:25.823470 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:26 crc kubenswrapper[4780]: I0219 08:49:26.071006 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4ffv"] Feb 19 08:49:26 crc kubenswrapper[4780]: I0219 08:49:26.624202 4780 generic.go:334] "Generic (PLEG): container finished" podID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerID="ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e" exitCode=0 Feb 19 08:49:26 crc kubenswrapper[4780]: I0219 08:49:26.624269 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerDied","Data":"ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e"} Feb 19 08:49:26 crc kubenswrapper[4780]: I0219 08:49:26.624332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerStarted","Data":"4b934f0efcfa3084df9a957bbfd7130728ce91d4b998a3b7a1db0c7237806428"} Feb 19 08:49:26 crc kubenswrapper[4780]: I0219 08:49:26.937756 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:49:26 crc kubenswrapper[4780]: E0219 08:49:26.938229 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:49:28 crc kubenswrapper[4780]: I0219 08:49:28.644350 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" 
event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerStarted","Data":"59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f"} Feb 19 08:49:29 crc kubenswrapper[4780]: I0219 08:49:29.658895 4780 generic.go:334] "Generic (PLEG): container finished" podID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerID="59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f" exitCode=0 Feb 19 08:49:29 crc kubenswrapper[4780]: I0219 08:49:29.659315 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerDied","Data":"59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f"} Feb 19 08:49:30 crc kubenswrapper[4780]: I0219 08:49:30.670562 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerStarted","Data":"4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b"} Feb 19 08:49:30 crc kubenswrapper[4780]: I0219 08:49:30.691099 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q4ffv" podStartSLOduration=2.037332 podStartE2EDuration="5.691080121s" podCreationTimestamp="2026-02-19 08:49:25 +0000 UTC" firstStartedPulling="2026-02-19 08:49:26.626089881 +0000 UTC m=+1709.369747350" lastFinishedPulling="2026-02-19 08:49:30.279837992 +0000 UTC m=+1713.023495471" observedRunningTime="2026-02-19 08:49:30.689610875 +0000 UTC m=+1713.433268354" watchObservedRunningTime="2026-02-19 08:49:30.691080121 +0000 UTC m=+1713.434737570" Feb 19 08:49:35 crc kubenswrapper[4780]: I0219 08:49:35.823804 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:35 crc kubenswrapper[4780]: I0219 08:49:35.825209 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:36 crc kubenswrapper[4780]: I0219 08:49:36.878848 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q4ffv" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="registry-server" probeResult="failure" output=< Feb 19 08:49:36 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 08:49:36 crc kubenswrapper[4780]: > Feb 19 08:49:38 crc kubenswrapper[4780]: I0219 08:49:38.941604 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:49:38 crc kubenswrapper[4780]: E0219 08:49:38.941972 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:49:45 crc kubenswrapper[4780]: I0219 08:49:45.901993 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:45 crc kubenswrapper[4780]: I0219 08:49:45.984759 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:46 crc kubenswrapper[4780]: I0219 08:49:46.154294 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4ffv"] Feb 19 08:49:47 crc kubenswrapper[4780]: I0219 08:49:47.818040 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q4ffv" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="registry-server" 
containerID="cri-o://4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b" gracePeriod=2 Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.199519 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.323374 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn2nk\" (UniqueName: \"kubernetes.io/projected/1556bc14-cdb3-49e0-80d0-f8fc37bace81-kube-api-access-jn2nk\") pod \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.323506 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-catalog-content\") pod \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.323564 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-utilities\") pod \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\" (UID: \"1556bc14-cdb3-49e0-80d0-f8fc37bace81\") " Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.325108 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-utilities" (OuterVolumeSpecName: "utilities") pod "1556bc14-cdb3-49e0-80d0-f8fc37bace81" (UID: "1556bc14-cdb3-49e0-80d0-f8fc37bace81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.335299 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1556bc14-cdb3-49e0-80d0-f8fc37bace81-kube-api-access-jn2nk" (OuterVolumeSpecName: "kube-api-access-jn2nk") pod "1556bc14-cdb3-49e0-80d0-f8fc37bace81" (UID: "1556bc14-cdb3-49e0-80d0-f8fc37bace81"). InnerVolumeSpecName "kube-api-access-jn2nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.425943 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.426001 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn2nk\" (UniqueName: \"kubernetes.io/projected/1556bc14-cdb3-49e0-80d0-f8fc37bace81-kube-api-access-jn2nk\") on node \"crc\" DevicePath \"\"" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.477942 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1556bc14-cdb3-49e0-80d0-f8fc37bace81" (UID: "1556bc14-cdb3-49e0-80d0-f8fc37bace81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.527077 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1556bc14-cdb3-49e0-80d0-f8fc37bace81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.829378 4780 generic.go:334] "Generic (PLEG): container finished" podID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerID="4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b" exitCode=0 Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.829433 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerDied","Data":"4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b"} Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.829448 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4ffv" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.829469 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4ffv" event={"ID":"1556bc14-cdb3-49e0-80d0-f8fc37bace81","Type":"ContainerDied","Data":"4b934f0efcfa3084df9a957bbfd7130728ce91d4b998a3b7a1db0c7237806428"} Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.829494 4780 scope.go:117] "RemoveContainer" containerID="4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.880236 4780 scope.go:117] "RemoveContainer" containerID="59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.889221 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4ffv"] Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.919064 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q4ffv"] Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.927402 4780 scope.go:117] "RemoveContainer" containerID="ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.965863 4780 scope.go:117] "RemoveContainer" containerID="4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b" Feb 19 08:49:48 crc kubenswrapper[4780]: E0219 08:49:48.966779 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b\": container with ID starting with 4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b not found: ID does not exist" containerID="4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.966838 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b"} err="failed to get container status \"4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b\": rpc error: code = NotFound desc = could not find container \"4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b\": container with ID starting with 4de7ab60c9333fa4ac24c26df160a70abae2c347e76fadb317e61d70fa2b510b not found: ID does not exist" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.966865 4780 scope.go:117] "RemoveContainer" containerID="59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f" Feb 19 08:49:48 crc kubenswrapper[4780]: E0219 08:49:48.968285 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f\": container with ID starting with 59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f not found: ID does not exist" containerID="59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.968326 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f"} err="failed to get container status \"59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f\": rpc error: code = NotFound desc = could not find container \"59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f\": container with ID starting with 59cc21153c88110cd3731d54e59c8e2dcc6359ec17b5fc455b288bc207adea2f not found: ID does not exist" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.968351 4780 scope.go:117] "RemoveContainer" containerID="ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e" Feb 19 08:49:48 crc kubenswrapper[4780]: E0219 
08:49:48.969002 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e\": container with ID starting with ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e not found: ID does not exist" containerID="ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e" Feb 19 08:49:48 crc kubenswrapper[4780]: I0219 08:49:48.969074 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e"} err="failed to get container status \"ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e\": rpc error: code = NotFound desc = could not find container \"ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e\": container with ID starting with ffe2c0fa4b132fce40ec60f1b749d9dd9ffde694b436c34d551db6650b007a6e not found: ID does not exist" Feb 19 08:49:49 crc kubenswrapper[4780]: I0219 08:49:49.938926 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:49:49 crc kubenswrapper[4780]: E0219 08:49:49.939519 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:49:49 crc kubenswrapper[4780]: I0219 08:49:49.950968 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" path="/var/lib/kubelet/pods/1556bc14-cdb3-49e0-80d0-f8fc37bace81/volumes" Feb 19 08:50:01 crc kubenswrapper[4780]: I0219 08:50:01.937769 
4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:50:01 crc kubenswrapper[4780]: E0219 08:50:01.938740 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:50:12 crc kubenswrapper[4780]: I0219 08:50:12.938885 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:50:12 crc kubenswrapper[4780]: E0219 08:50:12.939774 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:50:23 crc kubenswrapper[4780]: I0219 08:50:23.938637 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:50:23 crc kubenswrapper[4780]: E0219 08:50:23.939391 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:50:34 crc kubenswrapper[4780]: I0219 
08:50:34.938440 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:50:34 crc kubenswrapper[4780]: E0219 08:50:34.939601 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:50:47 crc kubenswrapper[4780]: I0219 08:50:47.942371 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:50:47 crc kubenswrapper[4780]: E0219 08:50:47.944602 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:51:00 crc kubenswrapper[4780]: I0219 08:51:00.937635 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:51:00 crc kubenswrapper[4780]: E0219 08:51:00.938287 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:51:14 crc 
kubenswrapper[4780]: I0219 08:51:14.938424 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:51:14 crc kubenswrapper[4780]: E0219 08:51:14.939271 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:51:28 crc kubenswrapper[4780]: I0219 08:51:28.937660 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:51:28 crc kubenswrapper[4780]: E0219 08:51:28.938427 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:51:39 crc kubenswrapper[4780]: I0219 08:51:39.940757 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:51:40 crc kubenswrapper[4780]: I0219 08:51:40.835884 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"68ab9e4ff105ccc552b076bb131d9a7534e48db4821a9b77ef15f425a9ec2cf5"} Feb 19 08:54:06 crc kubenswrapper[4780]: I0219 08:54:06.335987 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:54:06 crc kubenswrapper[4780]: I0219 08:54:06.336554 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:54:13 crc kubenswrapper[4780]: I0219 08:54:13.871580 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sssjs"] Feb 19 08:54:13 crc kubenswrapper[4780]: E0219 08:54:13.879253 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="extract-content" Feb 19 08:54:13 crc kubenswrapper[4780]: I0219 08:54:13.879311 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="extract-content" Feb 19 08:54:13 crc kubenswrapper[4780]: E0219 08:54:13.879392 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="registry-server" Feb 19 08:54:13 crc kubenswrapper[4780]: I0219 08:54:13.879402 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="registry-server" Feb 19 08:54:13 crc kubenswrapper[4780]: E0219 08:54:13.879420 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="extract-utilities" Feb 19 08:54:13 crc kubenswrapper[4780]: I0219 08:54:13.879428 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="extract-utilities" Feb 19 08:54:13 crc 
kubenswrapper[4780]: I0219 08:54:13.879711 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1556bc14-cdb3-49e0-80d0-f8fc37bace81" containerName="registry-server" Feb 19 08:54:13 crc kubenswrapper[4780]: I0219 08:54:13.880828 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:13 crc kubenswrapper[4780]: I0219 08:54:13.885349 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sssjs"] Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.050709 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-catalog-content\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.051317 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2mx\" (UniqueName: \"kubernetes.io/projected/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-kube-api-access-hh2mx\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.051637 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-utilities\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.079555 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jn5nb"] Feb 19 08:54:14 crc 
kubenswrapper[4780]: I0219 08:54:14.081314 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.094870 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn5nb"] Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.153246 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-catalog-content\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.153653 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2mx\" (UniqueName: \"kubernetes.io/projected/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-kube-api-access-hh2mx\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.153802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-utilities\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.153913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-catalog-content\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc 
kubenswrapper[4780]: I0219 08:54:14.154032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-utilities\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.156949 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47b5\" (UniqueName: \"kubernetes.io/projected/d93bf93e-64ec-4067-ac11-0d39782a7c5e-kube-api-access-n47b5\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.153804 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-catalog-content\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.154939 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-utilities\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.176387 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2mx\" (UniqueName: \"kubernetes.io/projected/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-kube-api-access-hh2mx\") pod \"certified-operators-sssjs\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc 
kubenswrapper[4780]: I0219 08:54:14.207945 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.259940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n47b5\" (UniqueName: \"kubernetes.io/projected/d93bf93e-64ec-4067-ac11-0d39782a7c5e-kube-api-access-n47b5\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.260283 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-utilities\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.260354 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-catalog-content\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.260960 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-utilities\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.262545 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-catalog-content\") pod 
\"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.285174 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47b5\" (UniqueName: \"kubernetes.io/projected/d93bf93e-64ec-4067-ac11-0d39782a7c5e-kube-api-access-n47b5\") pod \"community-operators-jn5nb\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.407942 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.699980 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sssjs"] Feb 19 08:54:14 crc kubenswrapper[4780]: I0219 08:54:14.884320 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn5nb"] Feb 19 08:54:14 crc kubenswrapper[4780]: W0219 08:54:14.893875 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd93bf93e_64ec_4067_ac11_0d39782a7c5e.slice/crio-3db3cc6c33507fcef531717f741f2aae6f94bcb6bc8b5425d318085647b0bf45 WatchSource:0}: Error finding container 3db3cc6c33507fcef531717f741f2aae6f94bcb6bc8b5425d318085647b0bf45: Status 404 returned error can't find the container with id 3db3cc6c33507fcef531717f741f2aae6f94bcb6bc8b5425d318085647b0bf45 Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.344750 4780 generic.go:334] "Generic (PLEG): container finished" podID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerID="6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0" exitCode=0 Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.344826 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerDied","Data":"6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0"} Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.344915 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerStarted","Data":"3db3cc6c33507fcef531717f741f2aae6f94bcb6bc8b5425d318085647b0bf45"} Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.348277 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.349310 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerID="d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e" exitCode=0 Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.349357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerDied","Data":"d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e"} Feb 19 08:54:15 crc kubenswrapper[4780]: I0219 08:54:15.349387 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerStarted","Data":"375597aace3349bd7bdf32a6b3d9c80466f5d09c04b21e281a65477e9a799644"} Feb 19 08:54:16 crc kubenswrapper[4780]: I0219 08:54:16.360265 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerStarted","Data":"a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2"} Feb 19 08:54:16 crc kubenswrapper[4780]: I0219 08:54:16.363308 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerStarted","Data":"2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a"} Feb 19 08:54:17 crc kubenswrapper[4780]: I0219 08:54:17.375106 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerID="a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2" exitCode=0 Feb 19 08:54:17 crc kubenswrapper[4780]: I0219 08:54:17.375216 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerDied","Data":"a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2"} Feb 19 08:54:17 crc kubenswrapper[4780]: I0219 08:54:17.381891 4780 generic.go:334] "Generic (PLEG): container finished" podID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerID="2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a" exitCode=0 Feb 19 08:54:17 crc kubenswrapper[4780]: I0219 08:54:17.381938 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerDied","Data":"2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a"} Feb 19 08:54:17 crc kubenswrapper[4780]: I0219 08:54:17.381969 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerStarted","Data":"7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117"} Feb 19 08:54:17 crc kubenswrapper[4780]: I0219 08:54:17.436227 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jn5nb" podStartSLOduration=1.989699008 podStartE2EDuration="3.436206802s" 
podCreationTimestamp="2026-02-19 08:54:14 +0000 UTC" firstStartedPulling="2026-02-19 08:54:15.347981306 +0000 UTC m=+1998.091638755" lastFinishedPulling="2026-02-19 08:54:16.79448907 +0000 UTC m=+1999.538146549" observedRunningTime="2026-02-19 08:54:17.434469249 +0000 UTC m=+2000.178126708" watchObservedRunningTime="2026-02-19 08:54:17.436206802 +0000 UTC m=+2000.179864251" Feb 19 08:54:18 crc kubenswrapper[4780]: I0219 08:54:18.402452 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerStarted","Data":"a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488"} Feb 19 08:54:18 crc kubenswrapper[4780]: I0219 08:54:18.429897 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sssjs" podStartSLOduration=3.004373072 podStartE2EDuration="5.42987652s" podCreationTimestamp="2026-02-19 08:54:13 +0000 UTC" firstStartedPulling="2026-02-19 08:54:15.35137988 +0000 UTC m=+1998.095037329" lastFinishedPulling="2026-02-19 08:54:17.776883318 +0000 UTC m=+2000.520540777" observedRunningTime="2026-02-19 08:54:18.423278997 +0000 UTC m=+2001.166936466" watchObservedRunningTime="2026-02-19 08:54:18.42987652 +0000 UTC m=+2001.173533979" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.210090 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.210602 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.254360 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.408333 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.408644 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.458318 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.509519 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:24 crc kubenswrapper[4780]: I0219 08:54:24.520268 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:26 crc kubenswrapper[4780]: I0219 08:54:26.293470 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sssjs"] Feb 19 08:54:26 crc kubenswrapper[4780]: I0219 08:54:26.482803 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sssjs" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="registry-server" containerID="cri-o://a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488" gracePeriod=2 Feb 19 08:54:26 crc kubenswrapper[4780]: I0219 08:54:26.902265 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn5nb"] Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.027938 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.145786 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-catalog-content\") pod \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.146456 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-utilities\") pod \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.146538 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh2mx\" (UniqueName: \"kubernetes.io/projected/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-kube-api-access-hh2mx\") pod \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\" (UID: \"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe\") " Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.148186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-utilities" (OuterVolumeSpecName: "utilities") pod "f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" (UID: "f4e2383d-aede-43fc-9d0d-c252c6c3f6fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.154496 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-kube-api-access-hh2mx" (OuterVolumeSpecName: "kube-api-access-hh2mx") pod "f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" (UID: "f4e2383d-aede-43fc-9d0d-c252c6c3f6fe"). InnerVolumeSpecName "kube-api-access-hh2mx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.235916 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" (UID: "f4e2383d-aede-43fc-9d0d-c252c6c3f6fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.249077 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.249163 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.249193 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh2mx\" (UniqueName: \"kubernetes.io/projected/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe-kube-api-access-hh2mx\") on node \"crc\" DevicePath \"\"" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.489526 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerID="a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488" exitCode=0 Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.489597 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sssjs" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.489630 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerDied","Data":"a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488"} Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.489683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sssjs" event={"ID":"f4e2383d-aede-43fc-9d0d-c252c6c3f6fe","Type":"ContainerDied","Data":"375597aace3349bd7bdf32a6b3d9c80466f5d09c04b21e281a65477e9a799644"} Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.489705 4780 scope.go:117] "RemoveContainer" containerID="a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.490177 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jn5nb" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="registry-server" containerID="cri-o://7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117" gracePeriod=2 Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.506669 4780 scope.go:117] "RemoveContainer" containerID="a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.522682 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sssjs"] Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.537054 4780 scope.go:117] "RemoveContainer" containerID="d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.547425 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sssjs"] Feb 19 08:54:27 crc 
kubenswrapper[4780]: I0219 08:54:27.613370 4780 scope.go:117] "RemoveContainer" containerID="a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488" Feb 19 08:54:27 crc kubenswrapper[4780]: E0219 08:54:27.614560 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488\": container with ID starting with a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488 not found: ID does not exist" containerID="a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.614606 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488"} err="failed to get container status \"a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488\": rpc error: code = NotFound desc = could not find container \"a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488\": container with ID starting with a907f091c9d6ec0bd818d31b453e461525b4e7d73b169e5192b6edbd1831d488 not found: ID does not exist" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.614632 4780 scope.go:117] "RemoveContainer" containerID="a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2" Feb 19 08:54:27 crc kubenswrapper[4780]: E0219 08:54:27.616489 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2\": container with ID starting with a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2 not found: ID does not exist" containerID="a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.616543 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2"} err="failed to get container status \"a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2\": rpc error: code = NotFound desc = could not find container \"a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2\": container with ID starting with a75776e55be5e1a3d2f07aa86ccd410c77469f77221658321152aff32ddb87f2 not found: ID does not exist" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.616574 4780 scope.go:117] "RemoveContainer" containerID="d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e" Feb 19 08:54:27 crc kubenswrapper[4780]: E0219 08:54:27.617136 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e\": container with ID starting with d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e not found: ID does not exist" containerID="d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.617167 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e"} err="failed to get container status \"d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e\": rpc error: code = NotFound desc = could not find container \"d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e\": container with ID starting with d6f49ba134c7da9c0e89a026a49e82e3fe8ed82767197c7bb50328124a57893e not found: ID does not exist" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.878252 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:27 crc kubenswrapper[4780]: I0219 08:54:27.952481 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" path="/var/lib/kubelet/pods/f4e2383d-aede-43fc-9d0d-c252c6c3f6fe/volumes" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.060876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-catalog-content\") pod \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.060976 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n47b5\" (UniqueName: \"kubernetes.io/projected/d93bf93e-64ec-4067-ac11-0d39782a7c5e-kube-api-access-n47b5\") pod \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.061086 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-utilities\") pod \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\" (UID: \"d93bf93e-64ec-4067-ac11-0d39782a7c5e\") " Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.061970 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-utilities" (OuterVolumeSpecName: "utilities") pod "d93bf93e-64ec-4067-ac11-0d39782a7c5e" (UID: "d93bf93e-64ec-4067-ac11-0d39782a7c5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.063788 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93bf93e-64ec-4067-ac11-0d39782a7c5e-kube-api-access-n47b5" (OuterVolumeSpecName: "kube-api-access-n47b5") pod "d93bf93e-64ec-4067-ac11-0d39782a7c5e" (UID: "d93bf93e-64ec-4067-ac11-0d39782a7c5e"). InnerVolumeSpecName "kube-api-access-n47b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.142078 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d93bf93e-64ec-4067-ac11-0d39782a7c5e" (UID: "d93bf93e-64ec-4067-ac11-0d39782a7c5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.163502 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.163535 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n47b5\" (UniqueName: \"kubernetes.io/projected/d93bf93e-64ec-4067-ac11-0d39782a7c5e-kube-api-access-n47b5\") on node \"crc\" DevicePath \"\"" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.163547 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93bf93e-64ec-4067-ac11-0d39782a7c5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.502889 4780 generic.go:334] "Generic (PLEG): container finished" podID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" 
containerID="7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117" exitCode=0 Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.502997 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn5nb" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.503032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerDied","Data":"7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117"} Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.503194 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn5nb" event={"ID":"d93bf93e-64ec-4067-ac11-0d39782a7c5e","Type":"ContainerDied","Data":"3db3cc6c33507fcef531717f741f2aae6f94bcb6bc8b5425d318085647b0bf45"} Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.503240 4780 scope.go:117] "RemoveContainer" containerID="7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.530173 4780 scope.go:117] "RemoveContainer" containerID="2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.535997 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn5nb"] Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.541053 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jn5nb"] Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.562311 4780 scope.go:117] "RemoveContainer" containerID="6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.588701 4780 scope.go:117] "RemoveContainer" containerID="7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117" Feb 19 
08:54:28 crc kubenswrapper[4780]: E0219 08:54:28.589154 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117\": container with ID starting with 7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117 not found: ID does not exist" containerID="7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.589214 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117"} err="failed to get container status \"7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117\": rpc error: code = NotFound desc = could not find container \"7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117\": container with ID starting with 7da846c7433295327c9982bba909acd0ff2a433c0131340057413ac196b3f117 not found: ID does not exist" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.589245 4780 scope.go:117] "RemoveContainer" containerID="2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a" Feb 19 08:54:28 crc kubenswrapper[4780]: E0219 08:54:28.589864 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a\": container with ID starting with 2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a not found: ID does not exist" containerID="2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.589900 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a"} err="failed to get container status 
\"2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a\": rpc error: code = NotFound desc = could not find container \"2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a\": container with ID starting with 2041c105be996a8979fc8eaa33d573f095128def9ced5e33581bcec903ba286a not found: ID does not exist" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.589928 4780 scope.go:117] "RemoveContainer" containerID="6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0" Feb 19 08:54:28 crc kubenswrapper[4780]: E0219 08:54:28.590208 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0\": container with ID starting with 6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0 not found: ID does not exist" containerID="6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0" Feb 19 08:54:28 crc kubenswrapper[4780]: I0219 08:54:28.590233 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0"} err="failed to get container status \"6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0\": rpc error: code = NotFound desc = could not find container \"6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0\": container with ID starting with 6f5f204a9be6a750d6b6260c0494b9c28eb2ca6ff017ce6e6aa5218665dc3dd0 not found: ID does not exist" Feb 19 08:54:29 crc kubenswrapper[4780]: I0219 08:54:29.946543 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" path="/var/lib/kubelet/pods/d93bf93e-64ec-4067-ac11-0d39782a7c5e/volumes" Feb 19 08:54:36 crc kubenswrapper[4780]: I0219 08:54:36.336686 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:54:36 crc kubenswrapper[4780]: I0219 08:54:36.337349 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.336314 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.338823 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.339064 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.339960 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68ab9e4ff105ccc552b076bb131d9a7534e48db4821a9b77ef15f425a9ec2cf5"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:55:06 crc 
kubenswrapper[4780]: I0219 08:55:06.340263 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://68ab9e4ff105ccc552b076bb131d9a7534e48db4821a9b77ef15f425a9ec2cf5" gracePeriod=600 Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.866080 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="68ab9e4ff105ccc552b076bb131d9a7534e48db4821a9b77ef15f425a9ec2cf5" exitCode=0 Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.866268 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"68ab9e4ff105ccc552b076bb131d9a7534e48db4821a9b77ef15f425a9ec2cf5"} Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.866614 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85"} Feb 19 08:55:06 crc kubenswrapper[4780]: I0219 08:55:06.866663 4780 scope.go:117] "RemoveContainer" containerID="6af29061dbece4dce287cc3cf0835427c774348574245056c5e1fdb62aca8a75" Feb 19 08:57:06 crc kubenswrapper[4780]: I0219 08:57:06.336488 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:57:06 crc kubenswrapper[4780]: I0219 08:57:06.336924 4780 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:57:36 crc kubenswrapper[4780]: I0219 08:57:36.336290 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:57:36 crc kubenswrapper[4780]: I0219 08:57:36.336909 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:58:06 crc kubenswrapper[4780]: I0219 08:58:06.336849 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:58:06 crc kubenswrapper[4780]: I0219 08:58:06.337643 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:58:06 crc kubenswrapper[4780]: I0219 08:58:06.337716 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 08:58:06 crc 
kubenswrapper[4780]: I0219 08:58:06.338642 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:58:06 crc kubenswrapper[4780]: I0219 08:58:06.338705 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" gracePeriod=600 Feb 19 08:58:06 crc kubenswrapper[4780]: E0219 08:58:06.469075 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:58:07 crc kubenswrapper[4780]: I0219 08:58:07.454746 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" exitCode=0 Feb 19 08:58:07 crc kubenswrapper[4780]: I0219 08:58:07.454818 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85"} Feb 19 08:58:07 crc kubenswrapper[4780]: I0219 08:58:07.454914 4780 scope.go:117] "RemoveContainer" 
containerID="68ab9e4ff105ccc552b076bb131d9a7534e48db4821a9b77ef15f425a9ec2cf5" Feb 19 08:58:07 crc kubenswrapper[4780]: I0219 08:58:07.455875 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:58:07 crc kubenswrapper[4780]: E0219 08:58:07.456289 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:58:22 crc kubenswrapper[4780]: I0219 08:58:22.938354 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:58:22 crc kubenswrapper[4780]: E0219 08:58:22.939201 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:58:35 crc kubenswrapper[4780]: I0219 08:58:35.938616 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:58:35 crc kubenswrapper[4780]: E0219 08:58:35.939582 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.331123 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-th5s4"] Feb 19 08:58:45 crc kubenswrapper[4780]: E0219 08:58:45.334274 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="extract-content" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334290 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="extract-content" Feb 19 08:58:45 crc kubenswrapper[4780]: E0219 08:58:45.334307 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="extract-utilities" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334314 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="extract-utilities" Feb 19 08:58:45 crc kubenswrapper[4780]: E0219 08:58:45.334331 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="registry-server" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334340 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="registry-server" Feb 19 08:58:45 crc kubenswrapper[4780]: E0219 08:58:45.334353 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="extract-content" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334360 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="extract-content" Feb 19 08:58:45 crc kubenswrapper[4780]: E0219 08:58:45.334380 4780 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="extract-utilities" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334387 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="extract-utilities" Feb 19 08:58:45 crc kubenswrapper[4780]: E0219 08:58:45.334399 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="registry-server" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334406 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="registry-server" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334579 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e2383d-aede-43fc-9d0d-c252c6c3f6fe" containerName="registry-server" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.334597 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93bf93e-64ec-4067-ac11-0d39782a7c5e" containerName="registry-server" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.335848 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.345891 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-th5s4"] Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.387831 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-catalog-content\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.387949 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-utilities\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.388231 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2x5j\" (UniqueName: \"kubernetes.io/projected/fe4b1692-45c0-460d-ac24-de767634b109-kube-api-access-j2x5j\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.489879 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-utilities\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.490229 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-j2x5j\" (UniqueName: \"kubernetes.io/projected/fe4b1692-45c0-460d-ac24-de767634b109-kube-api-access-j2x5j\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.490353 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-catalog-content\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.490804 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-utilities\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.491149 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-catalog-content\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.516671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2x5j\" (UniqueName: \"kubernetes.io/projected/fe4b1692-45c0-460d-ac24-de767634b109-kube-api-access-j2x5j\") pod \"redhat-marketplace-th5s4\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:45 crc kubenswrapper[4780]: I0219 08:58:45.654179 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:46 crc kubenswrapper[4780]: I0219 08:58:46.038100 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-th5s4"] Feb 19 08:58:46 crc kubenswrapper[4780]: I0219 08:58:46.869565 4780 generic.go:334] "Generic (PLEG): container finished" podID="fe4b1692-45c0-460d-ac24-de767634b109" containerID="d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038" exitCode=0 Feb 19 08:58:46 crc kubenswrapper[4780]: I0219 08:58:46.869674 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerDied","Data":"d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038"} Feb 19 08:58:46 crc kubenswrapper[4780]: I0219 08:58:46.871529 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerStarted","Data":"5fafbb64462bd37b62929b8ea5f7d853320d17988adc31527a45d7df7de24aaf"} Feb 19 08:58:47 crc kubenswrapper[4780]: I0219 08:58:47.879983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerStarted","Data":"f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e"} Feb 19 08:58:48 crc kubenswrapper[4780]: I0219 08:58:48.891791 4780 generic.go:334] "Generic (PLEG): container finished" podID="fe4b1692-45c0-460d-ac24-de767634b109" containerID="f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e" exitCode=0 Feb 19 08:58:48 crc kubenswrapper[4780]: I0219 08:58:48.891865 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" 
event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerDied","Data":"f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e"} Feb 19 08:58:49 crc kubenswrapper[4780]: I0219 08:58:49.903667 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerStarted","Data":"77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a"} Feb 19 08:58:49 crc kubenswrapper[4780]: I0219 08:58:49.930053 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-th5s4" podStartSLOduration=2.497495942 podStartE2EDuration="4.930029337s" podCreationTimestamp="2026-02-19 08:58:45 +0000 UTC" firstStartedPulling="2026-02-19 08:58:46.873629481 +0000 UTC m=+2269.617286940" lastFinishedPulling="2026-02-19 08:58:49.306162886 +0000 UTC m=+2272.049820335" observedRunningTime="2026-02-19 08:58:49.922822269 +0000 UTC m=+2272.666479728" watchObservedRunningTime="2026-02-19 08:58:49.930029337 +0000 UTC m=+2272.673686796" Feb 19 08:58:49 crc kubenswrapper[4780]: I0219 08:58:49.942311 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:58:49 crc kubenswrapper[4780]: E0219 08:58:49.942507 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:58:55 crc kubenswrapper[4780]: I0219 08:58:55.654486 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:55 crc 
kubenswrapper[4780]: I0219 08:58:55.655009 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:55 crc kubenswrapper[4780]: I0219 08:58:55.702352 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:56 crc kubenswrapper[4780]: I0219 08:58:56.027953 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:56 crc kubenswrapper[4780]: I0219 08:58:56.101670 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-th5s4"] Feb 19 08:58:57 crc kubenswrapper[4780]: I0219 08:58:57.972269 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-th5s4" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="registry-server" containerID="cri-o://77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a" gracePeriod=2 Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.400201 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.564077 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-catalog-content\") pod \"fe4b1692-45c0-460d-ac24-de767634b109\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.564220 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2x5j\" (UniqueName: \"kubernetes.io/projected/fe4b1692-45c0-460d-ac24-de767634b109-kube-api-access-j2x5j\") pod \"fe4b1692-45c0-460d-ac24-de767634b109\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.564267 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-utilities\") pod \"fe4b1692-45c0-460d-ac24-de767634b109\" (UID: \"fe4b1692-45c0-460d-ac24-de767634b109\") " Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.565817 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-utilities" (OuterVolumeSpecName: "utilities") pod "fe4b1692-45c0-460d-ac24-de767634b109" (UID: "fe4b1692-45c0-460d-ac24-de767634b109"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.580828 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4b1692-45c0-460d-ac24-de767634b109-kube-api-access-j2x5j" (OuterVolumeSpecName: "kube-api-access-j2x5j") pod "fe4b1692-45c0-460d-ac24-de767634b109" (UID: "fe4b1692-45c0-460d-ac24-de767634b109"). InnerVolumeSpecName "kube-api-access-j2x5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.667066 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2x5j\" (UniqueName: \"kubernetes.io/projected/fe4b1692-45c0-460d-ac24-de767634b109-kube-api-access-j2x5j\") on node \"crc\" DevicePath \"\"" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.667113 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.887903 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe4b1692-45c0-460d-ac24-de767634b109" (UID: "fe4b1692-45c0-460d-ac24-de767634b109"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.971133 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4b1692-45c0-460d-ac24-de767634b109-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.981437 4780 generic.go:334] "Generic (PLEG): container finished" podID="fe4b1692-45c0-460d-ac24-de767634b109" containerID="77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a" exitCode=0 Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.981506 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerDied","Data":"77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a"} Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.981518 4780 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th5s4" Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.981563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th5s4" event={"ID":"fe4b1692-45c0-460d-ac24-de767634b109","Type":"ContainerDied","Data":"5fafbb64462bd37b62929b8ea5f7d853320d17988adc31527a45d7df7de24aaf"} Feb 19 08:58:58 crc kubenswrapper[4780]: I0219 08:58:58.981601 4780 scope.go:117] "RemoveContainer" containerID="77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.005065 4780 scope.go:117] "RemoveContainer" containerID="f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.028191 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-th5s4"] Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.037666 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-th5s4"] Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.055991 4780 scope.go:117] "RemoveContainer" containerID="d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.078061 4780 scope.go:117] "RemoveContainer" containerID="77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a" Feb 19 08:58:59 crc kubenswrapper[4780]: E0219 08:58:59.078833 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a\": container with ID starting with 77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a not found: ID does not exist" containerID="77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.078896 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a"} err="failed to get container status \"77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a\": rpc error: code = NotFound desc = could not find container \"77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a\": container with ID starting with 77bb29ba2ac3d811678af26e5b032e4d4b532dfed76293e17e221e03f3bc3a6a not found: ID does not exist" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.078941 4780 scope.go:117] "RemoveContainer" containerID="f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e" Feb 19 08:58:59 crc kubenswrapper[4780]: E0219 08:58:59.079474 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e\": container with ID starting with f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e not found: ID does not exist" containerID="f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.079501 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e"} err="failed to get container status \"f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e\": rpc error: code = NotFound desc = could not find container \"f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e\": container with ID starting with f041d090c3c315d48acd7bc40eee0038c45d8a54b6c488a82f482d061d87fb7e not found: ID does not exist" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.079520 4780 scope.go:117] "RemoveContainer" containerID="d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038" Feb 19 08:58:59 crc kubenswrapper[4780]: E0219 
08:58:59.079822 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038\": container with ID starting with d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038 not found: ID does not exist" containerID="d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.079851 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038"} err="failed to get container status \"d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038\": rpc error: code = NotFound desc = could not find container \"d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038\": container with ID starting with d4d3bc3f33d09cad56af21f21ff5bd8947187dbec4f2de9aee863fa94e583038 not found: ID does not exist" Feb 19 08:58:59 crc kubenswrapper[4780]: I0219 08:58:59.952753 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4b1692-45c0-460d-ac24-de767634b109" path="/var/lib/kubelet/pods/fe4b1692-45c0-460d-ac24-de767634b109/volumes" Feb 19 08:59:03 crc kubenswrapper[4780]: I0219 08:59:03.938106 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:59:03 crc kubenswrapper[4780]: E0219 08:59:03.940744 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:59:14 crc kubenswrapper[4780]: I0219 08:59:14.939479 
4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:59:14 crc kubenswrapper[4780]: E0219 08:59:14.940435 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:59:26 crc kubenswrapper[4780]: I0219 08:59:26.938498 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:59:26 crc kubenswrapper[4780]: E0219 08:59:26.939734 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:59:39 crc kubenswrapper[4780]: I0219 08:59:39.938106 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:59:39 crc kubenswrapper[4780]: E0219 08:59:39.939092 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 08:59:53 crc kubenswrapper[4780]: I0219 
08:59:53.939576 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 08:59:53 crc kubenswrapper[4780]: E0219 08:59:53.940937 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.152117 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp"] Feb 19 09:00:00 crc kubenswrapper[4780]: E0219 09:00:00.152897 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="registry-server" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.152910 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="registry-server" Feb 19 09:00:00 crc kubenswrapper[4780]: E0219 09:00:00.152929 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="extract-utilities" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.152935 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="extract-utilities" Feb 19 09:00:00 crc kubenswrapper[4780]: E0219 09:00:00.152949 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="extract-content" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.152956 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4b1692-45c0-460d-ac24-de767634b109" 
containerName="extract-content" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.153072 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4b1692-45c0-460d-ac24-de767634b109" containerName="registry-server" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.153677 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.155516 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.160561 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp"] Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.164284 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.191093 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwstw\" (UniqueName: \"kubernetes.io/projected/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-kube-api-access-wwstw\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.191266 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-config-volume\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 
09:00:00.191427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-secret-volume\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.292869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwstw\" (UniqueName: \"kubernetes.io/projected/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-kube-api-access-wwstw\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.292985 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-config-volume\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.293153 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-secret-volume\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.294073 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-config-volume\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.302823 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-secret-volume\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.314360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwstw\" (UniqueName: \"kubernetes.io/projected/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-kube-api-access-wwstw\") pod \"collect-profiles-29524860-h9kjp\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.479174 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:00 crc kubenswrapper[4780]: I0219 09:00:00.740266 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp"] Feb 19 09:00:01 crc kubenswrapper[4780]: I0219 09:00:01.525288 4780 generic.go:334] "Generic (PLEG): container finished" podID="dca03495-9de0-43ec-b0bf-11dfc5fc8d70" containerID="2b1efe93d2b2279b1ea2ec4e9cd075dd6b94cd6877f0422f936c61c7723049aa" exitCode=0 Feb 19 09:00:01 crc kubenswrapper[4780]: I0219 09:00:01.525424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" event={"ID":"dca03495-9de0-43ec-b0bf-11dfc5fc8d70","Type":"ContainerDied","Data":"2b1efe93d2b2279b1ea2ec4e9cd075dd6b94cd6877f0422f936c61c7723049aa"} Feb 19 09:00:01 crc kubenswrapper[4780]: I0219 09:00:01.526006 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" event={"ID":"dca03495-9de0-43ec-b0bf-11dfc5fc8d70","Type":"ContainerStarted","Data":"4696989b569775a7cfbb75c71b688a8cbb25d4ffff0d79a3658205f6a85034f4"} Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.869862 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.934863 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwstw\" (UniqueName: \"kubernetes.io/projected/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-kube-api-access-wwstw\") pod \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.935193 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-secret-volume\") pod \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.935455 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-config-volume\") pod \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\" (UID: \"dca03495-9de0-43ec-b0bf-11dfc5fc8d70\") " Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.936199 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-config-volume" (OuterVolumeSpecName: "config-volume") pod "dca03495-9de0-43ec-b0bf-11dfc5fc8d70" (UID: "dca03495-9de0-43ec-b0bf-11dfc5fc8d70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.941240 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-kube-api-access-wwstw" (OuterVolumeSpecName: "kube-api-access-wwstw") pod "dca03495-9de0-43ec-b0bf-11dfc5fc8d70" (UID: "dca03495-9de0-43ec-b0bf-11dfc5fc8d70"). 
InnerVolumeSpecName "kube-api-access-wwstw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:00:02 crc kubenswrapper[4780]: I0219 09:00:02.943046 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dca03495-9de0-43ec-b0bf-11dfc5fc8d70" (UID: "dca03495-9de0-43ec-b0bf-11dfc5fc8d70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.037102 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.037153 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwstw\" (UniqueName: \"kubernetes.io/projected/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-kube-api-access-wwstw\") on node \"crc\" DevicePath \"\"" Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.037164 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dca03495-9de0-43ec-b0bf-11dfc5fc8d70-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.549344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" event={"ID":"dca03495-9de0-43ec-b0bf-11dfc5fc8d70","Type":"ContainerDied","Data":"4696989b569775a7cfbb75c71b688a8cbb25d4ffff0d79a3658205f6a85034f4"} Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.549412 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp" Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.549415 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4696989b569775a7cfbb75c71b688a8cbb25d4ffff0d79a3658205f6a85034f4" Feb 19 09:00:03 crc kubenswrapper[4780]: I0219 09:00:03.991738 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg"] Feb 19 09:00:04 crc kubenswrapper[4780]: I0219 09:00:03.999970 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-wr5xg"] Feb 19 09:00:05 crc kubenswrapper[4780]: I0219 09:00:05.947984 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8433aeaf-86c7-4f3a-b2c2-3e402450ee89" path="/var/lib/kubelet/pods/8433aeaf-86c7-4f3a-b2c2-3e402450ee89/volumes" Feb 19 09:00:06 crc kubenswrapper[4780]: I0219 09:00:06.938581 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:00:06 crc kubenswrapper[4780]: E0219 09:00:06.938953 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:00:19 crc kubenswrapper[4780]: I0219 09:00:19.938515 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:00:19 crc kubenswrapper[4780]: E0219 09:00:19.939495 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:00:30 crc kubenswrapper[4780]: I0219 09:00:30.938620 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:00:30 crc kubenswrapper[4780]: E0219 09:00:30.939379 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:00:41 crc kubenswrapper[4780]: I0219 09:00:41.937655 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:00:41 crc kubenswrapper[4780]: E0219 09:00:41.938406 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:00:50 crc kubenswrapper[4780]: I0219 09:00:50.932016 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c97qh"] Feb 19 09:00:50 crc kubenswrapper[4780]: E0219 09:00:50.937254 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca03495-9de0-43ec-b0bf-11dfc5fc8d70" 
containerName="collect-profiles" Feb 19 09:00:50 crc kubenswrapper[4780]: I0219 09:00:50.937419 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca03495-9de0-43ec-b0bf-11dfc5fc8d70" containerName="collect-profiles" Feb 19 09:00:50 crc kubenswrapper[4780]: I0219 09:00:50.937714 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca03495-9de0-43ec-b0bf-11dfc5fc8d70" containerName="collect-profiles" Feb 19 09:00:50 crc kubenswrapper[4780]: I0219 09:00:50.944471 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:50 crc kubenswrapper[4780]: I0219 09:00:50.963001 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c97qh"] Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.004242 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-utilities\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.004594 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzf27\" (UniqueName: \"kubernetes.io/projected/4af4177d-60d3-4465-9048-61e361f9017d-kube-api-access-nzf27\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.004731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-catalog-content\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " 
pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.106460 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-utilities\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.107026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-utilities\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.107168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzf27\" (UniqueName: \"kubernetes.io/projected/4af4177d-60d3-4465-9048-61e361f9017d-kube-api-access-nzf27\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.107243 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-catalog-content\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.107682 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-catalog-content\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc 
kubenswrapper[4780]: I0219 09:00:51.140329 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzf27\" (UniqueName: \"kubernetes.io/projected/4af4177d-60d3-4465-9048-61e361f9017d-kube-api-access-nzf27\") pod \"redhat-operators-c97qh\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.281789 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.717077 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c97qh"] Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.963693 4780 generic.go:334] "Generic (PLEG): container finished" podID="4af4177d-60d3-4465-9048-61e361f9017d" containerID="0770f0c853749d9557202a4363b7b6f995f1071df2791a8515a00838afa7d64d" exitCode=0 Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.963728 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c97qh" event={"ID":"4af4177d-60d3-4465-9048-61e361f9017d","Type":"ContainerDied","Data":"0770f0c853749d9557202a4363b7b6f995f1071df2791a8515a00838afa7d64d"} Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.963751 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c97qh" event={"ID":"4af4177d-60d3-4465-9048-61e361f9017d","Type":"ContainerStarted","Data":"a9d493b8f42d7353be2599390c050422e20eb926cc0feab2c0cfb3d698a8b2e7"} Feb 19 09:00:51 crc kubenswrapper[4780]: I0219 09:00:51.965154 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:00:52 crc kubenswrapper[4780]: I0219 09:00:52.951792 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:00:52 crc 
kubenswrapper[4780]: E0219 09:00:52.952549 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:00:53 crc kubenswrapper[4780]: I0219 09:00:53.984540 4780 generic.go:334] "Generic (PLEG): container finished" podID="4af4177d-60d3-4465-9048-61e361f9017d" containerID="1b5faede253b3052f99485fb810efeaa8ae526aa6dc167bcf8fc0b8e6107c517" exitCode=0 Feb 19 09:00:53 crc kubenswrapper[4780]: I0219 09:00:53.984651 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c97qh" event={"ID":"4af4177d-60d3-4465-9048-61e361f9017d","Type":"ContainerDied","Data":"1b5faede253b3052f99485fb810efeaa8ae526aa6dc167bcf8fc0b8e6107c517"} Feb 19 09:00:54 crc kubenswrapper[4780]: I0219 09:00:54.999043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c97qh" event={"ID":"4af4177d-60d3-4465-9048-61e361f9017d","Type":"ContainerStarted","Data":"0189f786ce117d0c8e349b94de8cf41ea8f86456ae44e3f251a898eee956a38a"} Feb 19 09:00:55 crc kubenswrapper[4780]: I0219 09:00:55.032315 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c97qh" podStartSLOduration=2.52294503 podStartE2EDuration="5.032293474s" podCreationTimestamp="2026-02-19 09:00:50 +0000 UTC" firstStartedPulling="2026-02-19 09:00:51.964856915 +0000 UTC m=+2394.708514374" lastFinishedPulling="2026-02-19 09:00:54.474205359 +0000 UTC m=+2397.217862818" observedRunningTime="2026-02-19 09:00:55.027744261 +0000 UTC m=+2397.771401720" watchObservedRunningTime="2026-02-19 09:00:55.032293474 +0000 UTC m=+2397.775950933" 
Feb 19 09:01:01 crc kubenswrapper[4780]: I0219 09:01:01.282142 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:01:01 crc kubenswrapper[4780]: I0219 09:01:01.282621 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:01:01 crc kubenswrapper[4780]: I0219 09:01:01.328847 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:01:02 crc kubenswrapper[4780]: I0219 09:01:02.098166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:01:02 crc kubenswrapper[4780]: I0219 09:01:02.152664 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c97qh"] Feb 19 09:01:03 crc kubenswrapper[4780]: I0219 09:01:03.692632 4780 scope.go:117] "RemoveContainer" containerID="de3197054d9a29ecfa71e93917d676aa0c275566f0a451657a1a3a949a44dc66" Feb 19 09:01:04 crc kubenswrapper[4780]: I0219 09:01:04.063255 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c97qh" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="registry-server" containerID="cri-o://0189f786ce117d0c8e349b94de8cf41ea8f86456ae44e3f251a898eee956a38a" gracePeriod=2 Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.082421 4780 generic.go:334] "Generic (PLEG): container finished" podID="4af4177d-60d3-4465-9048-61e361f9017d" containerID="0189f786ce117d0c8e349b94de8cf41ea8f86456ae44e3f251a898eee956a38a" exitCode=0 Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.082555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c97qh" 
event={"ID":"4af4177d-60d3-4465-9048-61e361f9017d","Type":"ContainerDied","Data":"0189f786ce117d0c8e349b94de8cf41ea8f86456ae44e3f251a898eee956a38a"} Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.281237 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.356817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-catalog-content\") pod \"4af4177d-60d3-4465-9048-61e361f9017d\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.356882 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzf27\" (UniqueName: \"kubernetes.io/projected/4af4177d-60d3-4465-9048-61e361f9017d-kube-api-access-nzf27\") pod \"4af4177d-60d3-4465-9048-61e361f9017d\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.356958 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-utilities\") pod \"4af4177d-60d3-4465-9048-61e361f9017d\" (UID: \"4af4177d-60d3-4465-9048-61e361f9017d\") " Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.358192 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-utilities" (OuterVolumeSpecName: "utilities") pod "4af4177d-60d3-4465-9048-61e361f9017d" (UID: "4af4177d-60d3-4465-9048-61e361f9017d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.362032 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af4177d-60d3-4465-9048-61e361f9017d-kube-api-access-nzf27" (OuterVolumeSpecName: "kube-api-access-nzf27") pod "4af4177d-60d3-4465-9048-61e361f9017d" (UID: "4af4177d-60d3-4465-9048-61e361f9017d"). InnerVolumeSpecName "kube-api-access-nzf27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.458353 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.458390 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzf27\" (UniqueName: \"kubernetes.io/projected/4af4177d-60d3-4465-9048-61e361f9017d-kube-api-access-nzf27\") on node \"crc\" DevicePath \"\"" Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.485541 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4af4177d-60d3-4465-9048-61e361f9017d" (UID: "4af4177d-60d3-4465-9048-61e361f9017d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:01:06 crc kubenswrapper[4780]: I0219 09:01:06.559065 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af4177d-60d3-4465-9048-61e361f9017d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.096668 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c97qh" event={"ID":"4af4177d-60d3-4465-9048-61e361f9017d","Type":"ContainerDied","Data":"a9d493b8f42d7353be2599390c050422e20eb926cc0feab2c0cfb3d698a8b2e7"} Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.096756 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c97qh" Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.096822 4780 scope.go:117] "RemoveContainer" containerID="0189f786ce117d0c8e349b94de8cf41ea8f86456ae44e3f251a898eee956a38a" Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.145565 4780 scope.go:117] "RemoveContainer" containerID="1b5faede253b3052f99485fb810efeaa8ae526aa6dc167bcf8fc0b8e6107c517" Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.160554 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c97qh"] Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.168890 4780 scope.go:117] "RemoveContainer" containerID="0770f0c853749d9557202a4363b7b6f995f1071df2791a8515a00838afa7d64d" Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.170753 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c97qh"] Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.937892 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:01:07 crc kubenswrapper[4780]: E0219 09:01:07.938360 4780 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:01:07 crc kubenswrapper[4780]: I0219 09:01:07.949403 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af4177d-60d3-4465-9048-61e361f9017d" path="/var/lib/kubelet/pods/4af4177d-60d3-4465-9048-61e361f9017d/volumes" Feb 19 09:01:22 crc kubenswrapper[4780]: I0219 09:01:22.939166 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:01:22 crc kubenswrapper[4780]: E0219 09:01:22.940179 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:01:34 crc kubenswrapper[4780]: I0219 09:01:34.938100 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:01:34 crc kubenswrapper[4780]: E0219 09:01:34.939163 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:01:48 
crc kubenswrapper[4780]: I0219 09:01:48.939513 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:01:48 crc kubenswrapper[4780]: E0219 09:01:48.940789 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:01:59 crc kubenswrapper[4780]: I0219 09:01:59.938950 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:01:59 crc kubenswrapper[4780]: E0219 09:01:59.940507 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:02:12 crc kubenswrapper[4780]: I0219 09:02:12.937919 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:02:12 crc kubenswrapper[4780]: E0219 09:02:12.938441 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" 
Feb 19 09:02:25 crc kubenswrapper[4780]: I0219 09:02:25.938600 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:02:25 crc kubenswrapper[4780]: E0219 09:02:25.939160 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:02:37 crc kubenswrapper[4780]: I0219 09:02:37.947379 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:02:37 crc kubenswrapper[4780]: E0219 09:02:37.948811 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:02:52 crc kubenswrapper[4780]: I0219 09:02:52.937791 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:02:52 crc kubenswrapper[4780]: E0219 09:02:52.938497 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:03:03 crc kubenswrapper[4780]: I0219 09:03:03.938496 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:03:03 crc kubenswrapper[4780]: E0219 09:03:03.941227 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:03:15 crc kubenswrapper[4780]: I0219 09:03:15.939485 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:03:17 crc kubenswrapper[4780]: I0219 09:03:17.000277 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"7ee6a0e52d6e52aa1866a6250e2bd79cff3f7b7bab09827977db36d1d7f8a62f"} Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.672735 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7gbv8"] Feb 19 09:04:53 crc kubenswrapper[4780]: E0219 09:04:53.675074 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="extract-utilities" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.675116 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="extract-utilities" Feb 19 09:04:53 crc kubenswrapper[4780]: E0219 09:04:53.675144 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af4177d-60d3-4465-9048-61e361f9017d" 
containerName="extract-content" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.675151 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="extract-content" Feb 19 09:04:53 crc kubenswrapper[4780]: E0219 09:04:53.675180 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="registry-server" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.675186 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="registry-server" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.675397 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af4177d-60d3-4465-9048-61e361f9017d" containerName="registry-server" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.676602 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.685922 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gbv8"] Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.862036 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-catalog-content\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.862409 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-utilities\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " 
pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.862483 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnjp9\" (UniqueName: \"kubernetes.io/projected/d6e5bd49-0930-4416-8e01-579ca38b6bad-kube-api-access-wnjp9\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.963466 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-catalog-content\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.963559 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-utilities\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.963787 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnjp9\" (UniqueName: \"kubernetes.io/projected/d6e5bd49-0930-4416-8e01-579ca38b6bad-kube-api-access-wnjp9\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.964263 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-catalog-content\") pod \"community-operators-7gbv8\" (UID: 
\"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.964303 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-utilities\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:53 crc kubenswrapper[4780]: I0219 09:04:53.994633 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnjp9\" (UniqueName: \"kubernetes.io/projected/d6e5bd49-0930-4416-8e01-579ca38b6bad-kube-api-access-wnjp9\") pod \"community-operators-7gbv8\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:54 crc kubenswrapper[4780]: I0219 09:04:54.056559 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:04:54 crc kubenswrapper[4780]: I0219 09:04:54.477737 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gbv8"] Feb 19 09:04:54 crc kubenswrapper[4780]: I0219 09:04:54.757333 4780 generic.go:334] "Generic (PLEG): container finished" podID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerID="85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4" exitCode=0 Feb 19 09:04:54 crc kubenswrapper[4780]: I0219 09:04:54.757394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerDied","Data":"85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4"} Feb 19 09:04:54 crc kubenswrapper[4780]: I0219 09:04:54.757659 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" 
event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerStarted","Data":"4fa8e5e2e397cd8f384917d1d2e22110257fe7a920532a289f34622669840de6"} Feb 19 09:04:55 crc kubenswrapper[4780]: I0219 09:04:55.766233 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerStarted","Data":"3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6"} Feb 19 09:04:56 crc kubenswrapper[4780]: I0219 09:04:56.779020 4780 generic.go:334] "Generic (PLEG): container finished" podID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerID="3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6" exitCode=0 Feb 19 09:04:56 crc kubenswrapper[4780]: I0219 09:04:56.779091 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerDied","Data":"3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6"} Feb 19 09:04:57 crc kubenswrapper[4780]: I0219 09:04:57.790505 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerStarted","Data":"22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e"} Feb 19 09:04:57 crc kubenswrapper[4780]: I0219 09:04:57.816253 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7gbv8" podStartSLOduration=2.275346203 podStartE2EDuration="4.816228658s" podCreationTimestamp="2026-02-19 09:04:53 +0000 UTC" firstStartedPulling="2026-02-19 09:04:54.759297078 +0000 UTC m=+2637.502954547" lastFinishedPulling="2026-02-19 09:04:57.300179553 +0000 UTC m=+2640.043837002" observedRunningTime="2026-02-19 09:04:57.812766703 +0000 UTC m=+2640.556424172" watchObservedRunningTime="2026-02-19 09:04:57.816228658 +0000 UTC 
m=+2640.559886107" Feb 19 09:05:04 crc kubenswrapper[4780]: I0219 09:05:04.057319 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:05:04 crc kubenswrapper[4780]: I0219 09:05:04.057759 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:05:04 crc kubenswrapper[4780]: I0219 09:05:04.119697 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:05:04 crc kubenswrapper[4780]: I0219 09:05:04.884923 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:05:04 crc kubenswrapper[4780]: I0219 09:05:04.935223 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gbv8"] Feb 19 09:05:06 crc kubenswrapper[4780]: I0219 09:05:06.861102 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7gbv8" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="registry-server" containerID="cri-o://22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e" gracePeriod=2 Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.251761 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.267780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-catalog-content\") pod \"d6e5bd49-0930-4416-8e01-579ca38b6bad\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.267832 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnjp9\" (UniqueName: \"kubernetes.io/projected/d6e5bd49-0930-4416-8e01-579ca38b6bad-kube-api-access-wnjp9\") pod \"d6e5bd49-0930-4416-8e01-579ca38b6bad\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.267882 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-utilities\") pod \"d6e5bd49-0930-4416-8e01-579ca38b6bad\" (UID: \"d6e5bd49-0930-4416-8e01-579ca38b6bad\") " Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.268994 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-utilities" (OuterVolumeSpecName: "utilities") pod "d6e5bd49-0930-4416-8e01-579ca38b6bad" (UID: "d6e5bd49-0930-4416-8e01-579ca38b6bad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.274257 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e5bd49-0930-4416-8e01-579ca38b6bad-kube-api-access-wnjp9" (OuterVolumeSpecName: "kube-api-access-wnjp9") pod "d6e5bd49-0930-4416-8e01-579ca38b6bad" (UID: "d6e5bd49-0930-4416-8e01-579ca38b6bad"). InnerVolumeSpecName "kube-api-access-wnjp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.328927 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6e5bd49-0930-4416-8e01-579ca38b6bad" (UID: "d6e5bd49-0930-4416-8e01-579ca38b6bad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.370452 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.370491 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e5bd49-0930-4416-8e01-579ca38b6bad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.370507 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnjp9\" (UniqueName: \"kubernetes.io/projected/d6e5bd49-0930-4416-8e01-579ca38b6bad-kube-api-access-wnjp9\") on node \"crc\" DevicePath \"\"" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.872955 4780 generic.go:334] "Generic (PLEG): container finished" podID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerID="22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e" exitCode=0 Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.873065 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gbv8" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.873060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerDied","Data":"22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e"} Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.873545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbv8" event={"ID":"d6e5bd49-0930-4416-8e01-579ca38b6bad","Type":"ContainerDied","Data":"4fa8e5e2e397cd8f384917d1d2e22110257fe7a920532a289f34622669840de6"} Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.873578 4780 scope.go:117] "RemoveContainer" containerID="22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.889057 4780 scope.go:117] "RemoveContainer" containerID="3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.910197 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gbv8"] Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.919326 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7gbv8"] Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.924571 4780 scope.go:117] "RemoveContainer" containerID="85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.938954 4780 scope.go:117] "RemoveContainer" containerID="22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e" Feb 19 09:05:07 crc kubenswrapper[4780]: E0219 09:05:07.939631 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e\": container with ID starting with 22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e not found: ID does not exist" containerID="22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.939679 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e"} err="failed to get container status \"22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e\": rpc error: code = NotFound desc = could not find container \"22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e\": container with ID starting with 22130a19504a94e8f220ba35c1f6f567269a45a70ebefc35c913567bcfce850e not found: ID does not exist" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.939706 4780 scope.go:117] "RemoveContainer" containerID="3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6" Feb 19 09:05:07 crc kubenswrapper[4780]: E0219 09:05:07.940164 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6\": container with ID starting with 3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6 not found: ID does not exist" containerID="3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.940207 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6"} err="failed to get container status \"3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6\": rpc error: code = NotFound desc = could not find container \"3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6\": container with ID 
starting with 3de1c74c3c5d793c9e1f87b3801b111cd72132664f726abab7d8125477a819d6 not found: ID does not exist" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.940236 4780 scope.go:117] "RemoveContainer" containerID="85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4" Feb 19 09:05:07 crc kubenswrapper[4780]: E0219 09:05:07.940558 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4\": container with ID starting with 85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4 not found: ID does not exist" containerID="85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.940593 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4"} err="failed to get container status \"85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4\": rpc error: code = NotFound desc = could not find container \"85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4\": container with ID starting with 85c37bb7ec8b5a89250fedf826d4ba0422f9afa464b1cd7c387dcbfbc46789c4 not found: ID does not exist" Feb 19 09:05:07 crc kubenswrapper[4780]: I0219 09:05:07.947462 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" path="/var/lib/kubelet/pods/d6e5bd49-0930-4416-8e01-579ca38b6bad/volumes" Feb 19 09:05:36 crc kubenswrapper[4780]: I0219 09:05:36.336012 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:05:36 crc kubenswrapper[4780]: I0219 
09:05:36.336614 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:06:06 crc kubenswrapper[4780]: I0219 09:06:06.336491 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:06:06 crc kubenswrapper[4780]: I0219 09:06:06.337401 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.336525 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.337119 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.337197 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.337929 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ee6a0e52d6e52aa1866a6250e2bd79cff3f7b7bab09827977db36d1d7f8a62f"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.337988 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://7ee6a0e52d6e52aa1866a6250e2bd79cff3f7b7bab09827977db36d1d7f8a62f" gracePeriod=600 Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.626877 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="7ee6a0e52d6e52aa1866a6250e2bd79cff3f7b7bab09827977db36d1d7f8a62f" exitCode=0 Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.626956 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"7ee6a0e52d6e52aa1866a6250e2bd79cff3f7b7bab09827977db36d1d7f8a62f"} Feb 19 09:06:36 crc kubenswrapper[4780]: I0219 09:06:36.627281 4780 scope.go:117] "RemoveContainer" containerID="1fb97eab442e2f1a805bd81067c98537760e338ecb8ab6ffaaaf25386c4d3f85" Feb 19 09:06:37 crc kubenswrapper[4780]: I0219 09:06:37.635959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db"} Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.094712 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5fsmz"] Feb 19 09:07:50 crc kubenswrapper[4780]: E0219 09:07:50.095747 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="registry-server" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.095768 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="registry-server" Feb 19 09:07:50 crc kubenswrapper[4780]: E0219 09:07:50.095834 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="extract-content" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.095844 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="extract-content" Feb 19 09:07:50 crc kubenswrapper[4780]: E0219 09:07:50.095859 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="extract-utilities" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.095868 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="extract-utilities" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.096304 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e5bd49-0930-4416-8e01-579ca38b6bad" containerName="registry-server" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.098042 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.108627 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fsmz"] Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.236519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-utilities\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.236612 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4xwp\" (UniqueName: \"kubernetes.io/projected/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-kube-api-access-w4xwp\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.236680 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-catalog-content\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.337964 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-catalog-content\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.338037 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-utilities\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.338081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4xwp\" (UniqueName: \"kubernetes.io/projected/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-kube-api-access-w4xwp\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.338607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-catalog-content\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.338627 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-utilities\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.358669 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4xwp\" (UniqueName: \"kubernetes.io/projected/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-kube-api-access-w4xwp\") pod \"certified-operators-5fsmz\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.417673 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:07:50 crc kubenswrapper[4780]: I0219 09:07:50.904465 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fsmz"] Feb 19 09:07:51 crc kubenswrapper[4780]: I0219 09:07:51.294448 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerID="e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200" exitCode=0 Feb 19 09:07:51 crc kubenswrapper[4780]: I0219 09:07:51.294557 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fsmz" event={"ID":"1e81fddf-4cb9-4f56-91c1-6e0d418a455f","Type":"ContainerDied","Data":"e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200"} Feb 19 09:07:51 crc kubenswrapper[4780]: I0219 09:07:51.294759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fsmz" event={"ID":"1e81fddf-4cb9-4f56-91c1-6e0d418a455f","Type":"ContainerStarted","Data":"77f53bf0732305e573baa687e892c94592d6a1a89baf010e1131bc7d4af0ad39"} Feb 19 09:07:51 crc kubenswrapper[4780]: I0219 09:07:51.296470 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:07:54 crc kubenswrapper[4780]: I0219 09:07:54.317891 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerID="848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413" exitCode=0 Feb 19 09:07:54 crc kubenswrapper[4780]: I0219 09:07:54.318032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fsmz" event={"ID":"1e81fddf-4cb9-4f56-91c1-6e0d418a455f","Type":"ContainerDied","Data":"848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413"} Feb 19 09:07:56 crc kubenswrapper[4780]: I0219 09:07:56.341782 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-5fsmz" event={"ID":"1e81fddf-4cb9-4f56-91c1-6e0d418a455f","Type":"ContainerStarted","Data":"df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25"} Feb 19 09:07:56 crc kubenswrapper[4780]: I0219 09:07:56.368434 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5fsmz" podStartSLOduration=2.007958503 podStartE2EDuration="6.368414515s" podCreationTimestamp="2026-02-19 09:07:50 +0000 UTC" firstStartedPulling="2026-02-19 09:07:51.296117885 +0000 UTC m=+2814.039775334" lastFinishedPulling="2026-02-19 09:07:55.656573887 +0000 UTC m=+2818.400231346" observedRunningTime="2026-02-19 09:07:56.363430892 +0000 UTC m=+2819.107088341" watchObservedRunningTime="2026-02-19 09:07:56.368414515 +0000 UTC m=+2819.112071954" Feb 19 09:08:00 crc kubenswrapper[4780]: I0219 09:08:00.418090 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:08:00 crc kubenswrapper[4780]: I0219 09:08:00.419359 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:08:00 crc kubenswrapper[4780]: I0219 09:08:00.481376 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:08:01 crc kubenswrapper[4780]: I0219 09:08:01.432098 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:08:01 crc kubenswrapper[4780]: I0219 09:08:01.503650 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fsmz"] Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.394840 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5fsmz" 
podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="registry-server" containerID="cri-o://df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25" gracePeriod=2 Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.858911 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.947488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xwp\" (UniqueName: \"kubernetes.io/projected/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-kube-api-access-w4xwp\") pod \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.948553 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-utilities\") pod \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.948646 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-catalog-content\") pod \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\" (UID: \"1e81fddf-4cb9-4f56-91c1-6e0d418a455f\") " Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.949195 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-utilities" (OuterVolumeSpecName: "utilities") pod "1e81fddf-4cb9-4f56-91c1-6e0d418a455f" (UID: "1e81fddf-4cb9-4f56-91c1-6e0d418a455f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:08:03 crc kubenswrapper[4780]: I0219 09:08:03.952316 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-kube-api-access-w4xwp" (OuterVolumeSpecName: "kube-api-access-w4xwp") pod "1e81fddf-4cb9-4f56-91c1-6e0d418a455f" (UID: "1e81fddf-4cb9-4f56-91c1-6e0d418a455f"). InnerVolumeSpecName "kube-api-access-w4xwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.009990 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e81fddf-4cb9-4f56-91c1-6e0d418a455f" (UID: "1e81fddf-4cb9-4f56-91c1-6e0d418a455f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.050118 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xwp\" (UniqueName: \"kubernetes.io/projected/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-kube-api-access-w4xwp\") on node \"crc\" DevicePath \"\"" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.050193 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.050202 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e81fddf-4cb9-4f56-91c1-6e0d418a455f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.407438 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" 
containerID="df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25" exitCode=0 Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.407504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fsmz" event={"ID":"1e81fddf-4cb9-4f56-91c1-6e0d418a455f","Type":"ContainerDied","Data":"df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25"} Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.407583 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fsmz" event={"ID":"1e81fddf-4cb9-4f56-91c1-6e0d418a455f","Type":"ContainerDied","Data":"77f53bf0732305e573baa687e892c94592d6a1a89baf010e1131bc7d4af0ad39"} Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.407608 4780 scope.go:117] "RemoveContainer" containerID="df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.407522 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fsmz" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.429846 4780 scope.go:117] "RemoveContainer" containerID="848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.457738 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fsmz"] Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.472967 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5fsmz"] Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.474109 4780 scope.go:117] "RemoveContainer" containerID="e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.508237 4780 scope.go:117] "RemoveContainer" containerID="df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25" Feb 19 09:08:04 crc kubenswrapper[4780]: E0219 09:08:04.508681 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25\": container with ID starting with df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25 not found: ID does not exist" containerID="df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.508731 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25"} err="failed to get container status \"df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25\": rpc error: code = NotFound desc = could not find container \"df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25\": container with ID starting with df3a6f45c0a35411eb6cfb3cf85635062a5c01adca80e6322db1df2b82ea9a25 not 
found: ID does not exist" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.508763 4780 scope.go:117] "RemoveContainer" containerID="848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413" Feb 19 09:08:04 crc kubenswrapper[4780]: E0219 09:08:04.509071 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413\": container with ID starting with 848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413 not found: ID does not exist" containerID="848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.509109 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413"} err="failed to get container status \"848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413\": rpc error: code = NotFound desc = could not find container \"848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413\": container with ID starting with 848e60570b7cec4b70f6c70157be1a78e2c600dd06220350f0fbf28cb81a0413 not found: ID does not exist" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.509152 4780 scope.go:117] "RemoveContainer" containerID="e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200" Feb 19 09:08:04 crc kubenswrapper[4780]: E0219 09:08:04.509443 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200\": container with ID starting with e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200 not found: ID does not exist" containerID="e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200" Feb 19 09:08:04 crc kubenswrapper[4780]: I0219 09:08:04.509474 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200"} err="failed to get container status \"e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200\": rpc error: code = NotFound desc = could not find container \"e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200\": container with ID starting with e2f351a9ebd78c1d992769885f663b3b814a05b1bd89c686382a21a595908200 not found: ID does not exist" Feb 19 09:08:05 crc kubenswrapper[4780]: I0219 09:08:05.946555 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" path="/var/lib/kubelet/pods/1e81fddf-4cb9-4f56-91c1-6e0d418a455f/volumes" Feb 19 09:08:36 crc kubenswrapper[4780]: I0219 09:08:36.336746 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:08:36 crc kubenswrapper[4780]: I0219 09:08:36.337414 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.305337 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkpr7"] Feb 19 09:08:53 crc kubenswrapper[4780]: E0219 09:08:53.306350 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="extract-utilities" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.306368 4780 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="extract-utilities" Feb 19 09:08:53 crc kubenswrapper[4780]: E0219 09:08:53.306397 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="registry-server" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.306407 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="registry-server" Feb 19 09:08:53 crc kubenswrapper[4780]: E0219 09:08:53.306436 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="extract-content" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.306446 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="extract-content" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.306643 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e81fddf-4cb9-4f56-91c1-6e0d418a455f" containerName="registry-server" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.308082 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.314093 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkpr7"] Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.373267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqpfp\" (UniqueName: \"kubernetes.io/projected/5222bc83-e69d-45d5-8354-aef46f703e01-kube-api-access-cqpfp\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.373381 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-utilities\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.373433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-catalog-content\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.474758 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqpfp\" (UniqueName: \"kubernetes.io/projected/5222bc83-e69d-45d5-8354-aef46f703e01-kube-api-access-cqpfp\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.474822 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-utilities\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.474896 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-catalog-content\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.475634 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-catalog-content\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.476326 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-utilities\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.497615 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqpfp\" (UniqueName: \"kubernetes.io/projected/5222bc83-e69d-45d5-8354-aef46f703e01-kube-api-access-cqpfp\") pod \"redhat-marketplace-vkpr7\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:53 crc kubenswrapper[4780]: I0219 09:08:53.629923 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:08:54 crc kubenswrapper[4780]: I0219 09:08:54.045810 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkpr7"] Feb 19 09:08:54 crc kubenswrapper[4780]: I0219 09:08:54.779584 4780 generic.go:334] "Generic (PLEG): container finished" podID="5222bc83-e69d-45d5-8354-aef46f703e01" containerID="699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6" exitCode=0 Feb 19 09:08:54 crc kubenswrapper[4780]: I0219 09:08:54.779681 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkpr7" event={"ID":"5222bc83-e69d-45d5-8354-aef46f703e01","Type":"ContainerDied","Data":"699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6"} Feb 19 09:08:54 crc kubenswrapper[4780]: I0219 09:08:54.780170 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkpr7" event={"ID":"5222bc83-e69d-45d5-8354-aef46f703e01","Type":"ContainerStarted","Data":"fb76e5b7b4c372e2ecf35ddd92c517a798af2ccc42a150e74e3f91bb97253144"} Feb 19 09:08:57 crc kubenswrapper[4780]: I0219 09:08:57.804594 4780 generic.go:334] "Generic (PLEG): container finished" podID="5222bc83-e69d-45d5-8354-aef46f703e01" containerID="eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69" exitCode=0 Feb 19 09:08:57 crc kubenswrapper[4780]: I0219 09:08:57.804634 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkpr7" event={"ID":"5222bc83-e69d-45d5-8354-aef46f703e01","Type":"ContainerDied","Data":"eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69"} Feb 19 09:08:59 crc kubenswrapper[4780]: I0219 09:08:59.822456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkpr7" 
event={"ID":"5222bc83-e69d-45d5-8354-aef46f703e01","Type":"ContainerStarted","Data":"d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08"} Feb 19 09:08:59 crc kubenswrapper[4780]: I0219 09:08:59.842995 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkpr7" podStartSLOduration=2.891554765 podStartE2EDuration="6.842966802s" podCreationTimestamp="2026-02-19 09:08:53 +0000 UTC" firstStartedPulling="2026-02-19 09:08:54.781827432 +0000 UTC m=+2877.525484891" lastFinishedPulling="2026-02-19 09:08:58.733239439 +0000 UTC m=+2881.476896928" observedRunningTime="2026-02-19 09:08:59.840676125 +0000 UTC m=+2882.584333574" watchObservedRunningTime="2026-02-19 09:08:59.842966802 +0000 UTC m=+2882.586624251" Feb 19 09:09:03 crc kubenswrapper[4780]: I0219 09:09:03.630319 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:09:03 crc kubenswrapper[4780]: I0219 09:09:03.630603 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:09:03 crc kubenswrapper[4780]: I0219 09:09:03.672742 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:09:03 crc kubenswrapper[4780]: I0219 09:09:03.892330 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:09:03 crc kubenswrapper[4780]: I0219 09:09:03.935494 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkpr7"] Feb 19 09:09:05 crc kubenswrapper[4780]: I0219 09:09:05.868403 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkpr7" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="registry-server" 
containerID="cri-o://d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08" gracePeriod=2 Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.336406 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.336492 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.452196 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.507984 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-utilities\") pod \"5222bc83-e69d-45d5-8354-aef46f703e01\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.508341 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqpfp\" (UniqueName: \"kubernetes.io/projected/5222bc83-e69d-45d5-8354-aef46f703e01-kube-api-access-cqpfp\") pod \"5222bc83-e69d-45d5-8354-aef46f703e01\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.508369 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-catalog-content\") pod \"5222bc83-e69d-45d5-8354-aef46f703e01\" (UID: \"5222bc83-e69d-45d5-8354-aef46f703e01\") " Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.509116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-utilities" (OuterVolumeSpecName: "utilities") pod "5222bc83-e69d-45d5-8354-aef46f703e01" (UID: "5222bc83-e69d-45d5-8354-aef46f703e01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.516626 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5222bc83-e69d-45d5-8354-aef46f703e01-kube-api-access-cqpfp" (OuterVolumeSpecName: "kube-api-access-cqpfp") pod "5222bc83-e69d-45d5-8354-aef46f703e01" (UID: "5222bc83-e69d-45d5-8354-aef46f703e01"). InnerVolumeSpecName "kube-api-access-cqpfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.535079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5222bc83-e69d-45d5-8354-aef46f703e01" (UID: "5222bc83-e69d-45d5-8354-aef46f703e01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.610023 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.610078 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqpfp\" (UniqueName: \"kubernetes.io/projected/5222bc83-e69d-45d5-8354-aef46f703e01-kube-api-access-cqpfp\") on node \"crc\" DevicePath \"\"" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.610094 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5222bc83-e69d-45d5-8354-aef46f703e01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.878104 4780 generic.go:334] "Generic (PLEG): container finished" podID="5222bc83-e69d-45d5-8354-aef46f703e01" containerID="d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08" exitCode=0 Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.878180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkpr7" event={"ID":"5222bc83-e69d-45d5-8354-aef46f703e01","Type":"ContainerDied","Data":"d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08"} Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.878257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkpr7" event={"ID":"5222bc83-e69d-45d5-8354-aef46f703e01","Type":"ContainerDied","Data":"fb76e5b7b4c372e2ecf35ddd92c517a798af2ccc42a150e74e3f91bb97253144"} Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.878286 4780 scope.go:117] "RemoveContainer" containerID="d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 
09:09:06.878278 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkpr7" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.902219 4780 scope.go:117] "RemoveContainer" containerID="eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.931117 4780 scope.go:117] "RemoveContainer" containerID="699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.931523 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkpr7"] Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.941048 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkpr7"] Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.975826 4780 scope.go:117] "RemoveContainer" containerID="d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08" Feb 19 09:09:06 crc kubenswrapper[4780]: E0219 09:09:06.976457 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08\": container with ID starting with d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08 not found: ID does not exist" containerID="d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.976508 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08"} err="failed to get container status \"d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08\": rpc error: code = NotFound desc = could not find container \"d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08\": container with ID starting with 
d1ad627f855965643137fb99990b215aeb95e0c196bc03c28e90deb0087b4c08 not found: ID does not exist" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.976541 4780 scope.go:117] "RemoveContainer" containerID="eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69" Feb 19 09:09:06 crc kubenswrapper[4780]: E0219 09:09:06.977134 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69\": container with ID starting with eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69 not found: ID does not exist" containerID="eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.977211 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69"} err="failed to get container status \"eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69\": rpc error: code = NotFound desc = could not find container \"eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69\": container with ID starting with eeb7b24821a6f0f7a6974e1c7ad91896a4fc1339c80b234022fb0de0e9c7ff69 not found: ID does not exist" Feb 19 09:09:06 crc kubenswrapper[4780]: I0219 09:09:06.977228 4780 scope.go:117] "RemoveContainer" containerID="699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6" Feb 19 09:09:06 crc kubenswrapper[4780]: E0219 09:09:06.977627 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6\": container with ID starting with 699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6 not found: ID does not exist" containerID="699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6" Feb 19 09:09:06 crc 
kubenswrapper[4780]: I0219 09:09:06.977654 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6"} err="failed to get container status \"699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6\": rpc error: code = NotFound desc = could not find container \"699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6\": container with ID starting with 699a7f16181ae996b4ebfa9ce98eadd3f4676099a9c18e5cf5536c3953afe6d6 not found: ID does not exist" Feb 19 09:09:07 crc kubenswrapper[4780]: I0219 09:09:07.954256 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" path="/var/lib/kubelet/pods/5222bc83-e69d-45d5-8354-aef46f703e01/volumes" Feb 19 09:09:36 crc kubenswrapper[4780]: I0219 09:09:36.336903 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:09:36 crc kubenswrapper[4780]: I0219 09:09:36.338618 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:09:36 crc kubenswrapper[4780]: I0219 09:09:36.338883 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:09:36 crc kubenswrapper[4780]: I0219 09:09:36.339673 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:09:36 crc kubenswrapper[4780]: I0219 09:09:36.339855 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" gracePeriod=600 Feb 19 09:09:36 crc kubenswrapper[4780]: E0219 09:09:36.485761 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:09:37 crc kubenswrapper[4780]: I0219 09:09:37.110837 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" exitCode=0 Feb 19 09:09:37 crc kubenswrapper[4780]: I0219 09:09:37.110924 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db"} Feb 19 09:09:37 crc kubenswrapper[4780]: I0219 09:09:37.111238 4780 scope.go:117] "RemoveContainer" containerID="7ee6a0e52d6e52aa1866a6250e2bd79cff3f7b7bab09827977db36d1d7f8a62f" Feb 19 09:09:37 crc kubenswrapper[4780]: I0219 09:09:37.111793 4780 
scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:09:37 crc kubenswrapper[4780]: E0219 09:09:37.112045 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:09:51 crc kubenswrapper[4780]: I0219 09:09:51.941454 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:09:51 crc kubenswrapper[4780]: E0219 09:09:51.952942 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:10:05 crc kubenswrapper[4780]: I0219 09:10:05.938601 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:10:05 crc kubenswrapper[4780]: E0219 09:10:05.939410 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:10:19 crc kubenswrapper[4780]: I0219 
09:10:19.940072 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:10:19 crc kubenswrapper[4780]: E0219 09:10:19.942088 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:10:34 crc kubenswrapper[4780]: I0219 09:10:34.939576 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:10:34 crc kubenswrapper[4780]: E0219 09:10:34.940615 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:10:49 crc kubenswrapper[4780]: I0219 09:10:49.938697 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:10:49 crc kubenswrapper[4780]: E0219 09:10:49.940091 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:11:01 crc 
kubenswrapper[4780]: I0219 09:11:01.939634 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:11:01 crc kubenswrapper[4780]: E0219 09:11:01.941200 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:11:16 crc kubenswrapper[4780]: I0219 09:11:16.939261 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:11:16 crc kubenswrapper[4780]: E0219 09:11:16.940195 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.210624 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mzrq4"] Feb 19 09:11:27 crc kubenswrapper[4780]: E0219 09:11:27.211552 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="registry-server" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.211567 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="registry-server" Feb 19 09:11:27 crc kubenswrapper[4780]: E0219 09:11:27.211589 4780 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="extract-content" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.211598 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="extract-content" Feb 19 09:11:27 crc kubenswrapper[4780]: E0219 09:11:27.211614 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="extract-utilities" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.211623 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="extract-utilities" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.211801 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5222bc83-e69d-45d5-8354-aef46f703e01" containerName="registry-server" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.213680 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.214639 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-utilities\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.214798 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-catalog-content\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.214976 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54lqm\" (UniqueName: \"kubernetes.io/projected/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-kube-api-access-54lqm\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.238772 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzrq4"] Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.315886 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-catalog-content\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.315994 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-54lqm\" (UniqueName: \"kubernetes.io/projected/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-kube-api-access-54lqm\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.316055 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-utilities\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.316841 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-catalog-content\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.319148 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-utilities\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.347446 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54lqm\" (UniqueName: \"kubernetes.io/projected/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-kube-api-access-54lqm\") pod \"redhat-operators-mzrq4\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:27 crc kubenswrapper[4780]: I0219 09:11:27.535487 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:28 crc kubenswrapper[4780]: I0219 09:11:28.046427 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzrq4"] Feb 19 09:11:28 crc kubenswrapper[4780]: I0219 09:11:28.435933 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerStarted","Data":"1a67535c0384d9a938e4a8fb5b5ab18091bafe7667a7a6287701675f710c792a"} Feb 19 09:11:28 crc kubenswrapper[4780]: I0219 09:11:28.938489 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:11:28 crc kubenswrapper[4780]: E0219 09:11:28.938847 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:11:29 crc kubenswrapper[4780]: I0219 09:11:29.447851 4780 generic.go:334] "Generic (PLEG): container finished" podID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerID="bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0" exitCode=0 Feb 19 09:11:29 crc kubenswrapper[4780]: I0219 09:11:29.449111 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerDied","Data":"bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0"} Feb 19 09:11:31 crc kubenswrapper[4780]: I0219 09:11:31.467330 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" 
event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerStarted","Data":"428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6"} Feb 19 09:11:32 crc kubenswrapper[4780]: I0219 09:11:32.478246 4780 generic.go:334] "Generic (PLEG): container finished" podID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerID="428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6" exitCode=0 Feb 19 09:11:32 crc kubenswrapper[4780]: I0219 09:11:32.478332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerDied","Data":"428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6"} Feb 19 09:11:34 crc kubenswrapper[4780]: I0219 09:11:34.516812 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerStarted","Data":"deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d"} Feb 19 09:11:34 crc kubenswrapper[4780]: I0219 09:11:34.553599 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mzrq4" podStartSLOduration=3.554154662 podStartE2EDuration="7.553578057s" podCreationTimestamp="2026-02-19 09:11:27 +0000 UTC" firstStartedPulling="2026-02-19 09:11:29.450427977 +0000 UTC m=+3032.194085426" lastFinishedPulling="2026-02-19 09:11:33.449851372 +0000 UTC m=+3036.193508821" observedRunningTime="2026-02-19 09:11:34.54925586 +0000 UTC m=+3037.292913319" watchObservedRunningTime="2026-02-19 09:11:34.553578057 +0000 UTC m=+3037.297235506" Feb 19 09:11:37 crc kubenswrapper[4780]: I0219 09:11:37.536425 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:37 crc kubenswrapper[4780]: I0219 09:11:37.536963 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:38 crc kubenswrapper[4780]: I0219 09:11:38.582009 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mzrq4" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="registry-server" probeResult="failure" output=< Feb 19 09:11:38 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 09:11:38 crc kubenswrapper[4780]: > Feb 19 09:11:41 crc kubenswrapper[4780]: I0219 09:11:41.938039 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:11:41 crc kubenswrapper[4780]: E0219 09:11:41.938803 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:11:47 crc kubenswrapper[4780]: I0219 09:11:47.596911 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:47 crc kubenswrapper[4780]: I0219 09:11:47.653191 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:47 crc kubenswrapper[4780]: I0219 09:11:47.843188 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzrq4"] Feb 19 09:11:48 crc kubenswrapper[4780]: I0219 09:11:48.827040 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mzrq4" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="registry-server" 
containerID="cri-o://deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d" gracePeriod=2 Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.809719 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.882532 4780 generic.go:334] "Generic (PLEG): container finished" podID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerID="deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d" exitCode=0 Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.882582 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerDied","Data":"deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d"} Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.882608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzrq4" event={"ID":"30ea89f9-44ec-4a99-bbab-0eeb3c5465df","Type":"ContainerDied","Data":"1a67535c0384d9a938e4a8fb5b5ab18091bafe7667a7a6287701675f710c792a"} Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.882625 4780 scope.go:117] "RemoveContainer" containerID="deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.882753 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzrq4" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.904083 4780 scope.go:117] "RemoveContainer" containerID="428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.928707 4780 scope.go:117] "RemoveContainer" containerID="bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.942430 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54lqm\" (UniqueName: \"kubernetes.io/projected/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-kube-api-access-54lqm\") pod \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.942572 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-catalog-content\") pod \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.942605 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-utilities\") pod \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\" (UID: \"30ea89f9-44ec-4a99-bbab-0eeb3c5465df\") " Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.944236 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-utilities" (OuterVolumeSpecName: "utilities") pod "30ea89f9-44ec-4a99-bbab-0eeb3c5465df" (UID: "30ea89f9-44ec-4a99-bbab-0eeb3c5465df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.948851 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-kube-api-access-54lqm" (OuterVolumeSpecName: "kube-api-access-54lqm") pod "30ea89f9-44ec-4a99-bbab-0eeb3c5465df" (UID: "30ea89f9-44ec-4a99-bbab-0eeb3c5465df"). InnerVolumeSpecName "kube-api-access-54lqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.958026 4780 scope.go:117] "RemoveContainer" containerID="deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d" Feb 19 09:11:49 crc kubenswrapper[4780]: E0219 09:11:49.961947 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d\": container with ID starting with deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d not found: ID does not exist" containerID="deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.961990 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d"} err="failed to get container status \"deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d\": rpc error: code = NotFound desc = could not find container \"deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d\": container with ID starting with deb1b2f5aba0a45064fa21088dbe8f07322928f54181a327e7ddf57e80e8e57d not found: ID does not exist" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.962020 4780 scope.go:117] "RemoveContainer" containerID="428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6" Feb 19 09:11:49 crc kubenswrapper[4780]: E0219 09:11:49.962391 
4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6\": container with ID starting with 428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6 not found: ID does not exist" containerID="428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.962451 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6"} err="failed to get container status \"428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6\": rpc error: code = NotFound desc = could not find container \"428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6\": container with ID starting with 428fb3b38ef333c70d1d44cee888396ca6558243c2bfdbf4490a49bb0c1747c6 not found: ID does not exist" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.962493 4780 scope.go:117] "RemoveContainer" containerID="bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0" Feb 19 09:11:49 crc kubenswrapper[4780]: E0219 09:11:49.963085 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0\": container with ID starting with bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0 not found: ID does not exist" containerID="bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0" Feb 19 09:11:49 crc kubenswrapper[4780]: I0219 09:11:49.963153 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0"} err="failed to get container status \"bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0\": rpc error: code = 
NotFound desc = could not find container \"bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0\": container with ID starting with bd6d67035f5b2d20ae996228fe7fe7092bcff85188c2bb789943231bd7b2efe0 not found: ID does not exist" Feb 19 09:11:50 crc kubenswrapper[4780]: I0219 09:11:50.044363 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:11:50 crc kubenswrapper[4780]: I0219 09:11:50.044387 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54lqm\" (UniqueName: \"kubernetes.io/projected/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-kube-api-access-54lqm\") on node \"crc\" DevicePath \"\"" Feb 19 09:11:50 crc kubenswrapper[4780]: I0219 09:11:50.078237 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30ea89f9-44ec-4a99-bbab-0eeb3c5465df" (UID: "30ea89f9-44ec-4a99-bbab-0eeb3c5465df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:11:50 crc kubenswrapper[4780]: I0219 09:11:50.145562 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ea89f9-44ec-4a99-bbab-0eeb3c5465df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:11:50 crc kubenswrapper[4780]: I0219 09:11:50.219075 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzrq4"] Feb 19 09:11:50 crc kubenswrapper[4780]: I0219 09:11:50.226036 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mzrq4"] Feb 19 09:11:51 crc kubenswrapper[4780]: I0219 09:11:51.953399 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" path="/var/lib/kubelet/pods/30ea89f9-44ec-4a99-bbab-0eeb3c5465df/volumes" Feb 19 09:11:53 crc kubenswrapper[4780]: I0219 09:11:53.938001 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:11:53 crc kubenswrapper[4780]: E0219 09:11:53.938553 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:12:07 crc kubenswrapper[4780]: I0219 09:12:07.938694 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:12:07 crc kubenswrapper[4780]: E0219 09:12:07.939485 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:12:22 crc kubenswrapper[4780]: I0219 09:12:22.939569 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:12:22 crc kubenswrapper[4780]: E0219 09:12:22.941065 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:12:34 crc kubenswrapper[4780]: I0219 09:12:34.938030 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:12:34 crc kubenswrapper[4780]: E0219 09:12:34.938738 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:12:46 crc kubenswrapper[4780]: I0219 09:12:46.938623 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:12:46 crc kubenswrapper[4780]: E0219 09:12:46.939706 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:13:00 crc kubenswrapper[4780]: I0219 09:13:00.938382 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:13:00 crc kubenswrapper[4780]: E0219 09:13:00.940240 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:13:12 crc kubenswrapper[4780]: I0219 09:13:12.938518 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:13:12 crc kubenswrapper[4780]: E0219 09:13:12.939374 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:13:27 crc kubenswrapper[4780]: I0219 09:13:27.943667 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:13:27 crc kubenswrapper[4780]: E0219 09:13:27.944448 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:13:38 crc kubenswrapper[4780]: I0219 09:13:38.938825 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:13:38 crc kubenswrapper[4780]: E0219 09:13:38.939745 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:13:51 crc kubenswrapper[4780]: I0219 09:13:51.938163 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:13:51 crc kubenswrapper[4780]: E0219 09:13:51.938949 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:14:02 crc kubenswrapper[4780]: I0219 09:14:02.938355 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:14:02 crc kubenswrapper[4780]: E0219 09:14:02.939081 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:14:16 crc kubenswrapper[4780]: I0219 09:14:16.945653 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:14:16 crc kubenswrapper[4780]: E0219 09:14:16.947110 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:14:31 crc kubenswrapper[4780]: I0219 09:14:31.938733 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:14:31 crc kubenswrapper[4780]: E0219 09:14:31.939402 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:14:44 crc kubenswrapper[4780]: I0219 09:14:44.939071 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:14:45 crc kubenswrapper[4780]: I0219 09:14:45.344287 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"81aea7bc89e55250a479eb04de75ac3b9e4e9967ddf44a0d04f2d0e5b4e1f514"} Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.149766 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn"] Feb 19 09:15:00 crc kubenswrapper[4780]: E0219 09:15:00.150907 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="registry-server" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.150928 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="registry-server" Feb 19 09:15:00 crc kubenswrapper[4780]: E0219 09:15:00.150954 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="extract-utilities" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.150962 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="extract-utilities" Feb 19 09:15:00 crc kubenswrapper[4780]: E0219 09:15:00.150988 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="extract-content" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.150997 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="extract-content" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.151212 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ea89f9-44ec-4a99-bbab-0eeb3c5465df" containerName="registry-server" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.151794 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.156181 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.156593 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.165843 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn"] Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.202980 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d67ee36-1d79-4e07-84e1-ab2393377346-secret-volume\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.203067 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcsf\" (UniqueName: \"kubernetes.io/projected/3d67ee36-1d79-4e07-84e1-ab2393377346-kube-api-access-2vcsf\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.203394 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d67ee36-1d79-4e07-84e1-ab2393377346-config-volume\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.304416 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d67ee36-1d79-4e07-84e1-ab2393377346-config-volume\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.304509 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d67ee36-1d79-4e07-84e1-ab2393377346-secret-volume\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.304542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vcsf\" (UniqueName: \"kubernetes.io/projected/3d67ee36-1d79-4e07-84e1-ab2393377346-kube-api-access-2vcsf\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.305554 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d67ee36-1d79-4e07-84e1-ab2393377346-config-volume\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.317970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3d67ee36-1d79-4e07-84e1-ab2393377346-secret-volume\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.322055 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vcsf\" (UniqueName: \"kubernetes.io/projected/3d67ee36-1d79-4e07-84e1-ab2393377346-kube-api-access-2vcsf\") pod \"collect-profiles-29524875-pg6cn\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.474914 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:00 crc kubenswrapper[4780]: I0219 09:15:00.910780 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn"] Feb 19 09:15:00 crc kubenswrapper[4780]: W0219 09:15:00.915241 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d67ee36_1d79_4e07_84e1_ab2393377346.slice/crio-1fe72c172903e7d88c676b45c7afae0048007e6cebdc1c90ca8c8091fe7b189d WatchSource:0}: Error finding container 1fe72c172903e7d88c676b45c7afae0048007e6cebdc1c90ca8c8091fe7b189d: Status 404 returned error can't find the container with id 1fe72c172903e7d88c676b45c7afae0048007e6cebdc1c90ca8c8091fe7b189d Feb 19 09:15:01 crc kubenswrapper[4780]: I0219 09:15:01.490640 4780 generic.go:334] "Generic (PLEG): container finished" podID="3d67ee36-1d79-4e07-84e1-ab2393377346" containerID="9c09a31da98967b19cfe5486935febcfe62d898b9675671c01d5bca9860a3521" exitCode=0 Feb 19 09:15:01 crc kubenswrapper[4780]: I0219 09:15:01.490826 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" event={"ID":"3d67ee36-1d79-4e07-84e1-ab2393377346","Type":"ContainerDied","Data":"9c09a31da98967b19cfe5486935febcfe62d898b9675671c01d5bca9860a3521"} Feb 19 09:15:01 crc kubenswrapper[4780]: I0219 09:15:01.491027 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" event={"ID":"3d67ee36-1d79-4e07-84e1-ab2393377346","Type":"ContainerStarted","Data":"1fe72c172903e7d88c676b45c7afae0048007e6cebdc1c90ca8c8091fe7b189d"} Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.111077 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v4gmf"] Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.112978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.163723 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4gmf"] Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.230952 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-utilities\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.231003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-catalog-content\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.231026 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfsvd\" (UniqueName: \"kubernetes.io/projected/65889cff-9ae6-4e54-994c-d3742363db8d-kube-api-access-dfsvd\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.332233 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-catalog-content\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.332275 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfsvd\" (UniqueName: \"kubernetes.io/projected/65889cff-9ae6-4e54-994c-d3742363db8d-kube-api-access-dfsvd\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.332380 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-utilities\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.332783 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-utilities\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.332847 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-catalog-content\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.351183 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfsvd\" (UniqueName: \"kubernetes.io/projected/65889cff-9ae6-4e54-994c-d3742363db8d-kube-api-access-dfsvd\") pod \"community-operators-v4gmf\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.431233 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.734499 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.839721 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d67ee36-1d79-4e07-84e1-ab2393377346-secret-volume\") pod \"3d67ee36-1d79-4e07-84e1-ab2393377346\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.839770 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vcsf\" (UniqueName: \"kubernetes.io/projected/3d67ee36-1d79-4e07-84e1-ab2393377346-kube-api-access-2vcsf\") pod \"3d67ee36-1d79-4e07-84e1-ab2393377346\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.839907 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d67ee36-1d79-4e07-84e1-ab2393377346-config-volume\") pod \"3d67ee36-1d79-4e07-84e1-ab2393377346\" (UID: \"3d67ee36-1d79-4e07-84e1-ab2393377346\") " Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.840768 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d67ee36-1d79-4e07-84e1-ab2393377346-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d67ee36-1d79-4e07-84e1-ab2393377346" (UID: "3d67ee36-1d79-4e07-84e1-ab2393377346"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.845890 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d67ee36-1d79-4e07-84e1-ab2393377346-kube-api-access-2vcsf" (OuterVolumeSpecName: "kube-api-access-2vcsf") pod "3d67ee36-1d79-4e07-84e1-ab2393377346" (UID: "3d67ee36-1d79-4e07-84e1-ab2393377346"). 
InnerVolumeSpecName "kube-api-access-2vcsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.845916 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d67ee36-1d79-4e07-84e1-ab2393377346-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d67ee36-1d79-4e07-84e1-ab2393377346" (UID: "3d67ee36-1d79-4e07-84e1-ab2393377346"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.942096 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d67ee36-1d79-4e07-84e1-ab2393377346-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.942451 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d67ee36-1d79-4e07-84e1-ab2393377346-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.942468 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vcsf\" (UniqueName: \"kubernetes.io/projected/3d67ee36-1d79-4e07-84e1-ab2393377346-kube-api-access-2vcsf\") on node \"crc\" DevicePath \"\"" Feb 19 09:15:02 crc kubenswrapper[4780]: I0219 09:15:02.974151 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4gmf"] Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.511628 4780 generic.go:334] "Generic (PLEG): container finished" podID="65889cff-9ae6-4e54-994c-d3742363db8d" containerID="57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e" exitCode=0 Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.511745 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4gmf" 
event={"ID":"65889cff-9ae6-4e54-994c-d3742363db8d","Type":"ContainerDied","Data":"57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e"} Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.512046 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4gmf" event={"ID":"65889cff-9ae6-4e54-994c-d3742363db8d","Type":"ContainerStarted","Data":"4eceafbc3ba771531c05c60ef91bd383bfae3b6fa121f6d5d3b4e466d5de6c1b"} Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.515195 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.515404 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" event={"ID":"3d67ee36-1d79-4e07-84e1-ab2393377346","Type":"ContainerDied","Data":"1fe72c172903e7d88c676b45c7afae0048007e6cebdc1c90ca8c8091fe7b189d"} Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.515441 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fe72c172903e7d88c676b45c7afae0048007e6cebdc1c90ca8c8091fe7b189d" Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.515631 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn" Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.804717 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc"] Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.810094 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-wl9dc"] Feb 19 09:15:03 crc kubenswrapper[4780]: I0219 09:15:03.946774 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e11610c-3763-4c6b-954a-bce83e2aec5a" path="/var/lib/kubelet/pods/7e11610c-3763-4c6b-954a-bce83e2aec5a/volumes" Feb 19 09:15:04 crc kubenswrapper[4780]: I0219 09:15:04.042850 4780 scope.go:117] "RemoveContainer" containerID="d6b56a3c06eb41659aab416a6300badd7083bc5f86557902bdff626df48241c4" Feb 19 09:15:05 crc kubenswrapper[4780]: I0219 09:15:05.531724 4780 generic.go:334] "Generic (PLEG): container finished" podID="65889cff-9ae6-4e54-994c-d3742363db8d" containerID="4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819" exitCode=0 Feb 19 09:15:05 crc kubenswrapper[4780]: I0219 09:15:05.531779 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4gmf" event={"ID":"65889cff-9ae6-4e54-994c-d3742363db8d","Type":"ContainerDied","Data":"4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819"} Feb 19 09:15:06 crc kubenswrapper[4780]: I0219 09:15:06.543460 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4gmf" event={"ID":"65889cff-9ae6-4e54-994c-d3742363db8d","Type":"ContainerStarted","Data":"c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4"} Feb 19 09:15:06 crc kubenswrapper[4780]: I0219 09:15:06.568499 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-v4gmf" podStartSLOduration=2.151919526 podStartE2EDuration="4.568479011s" podCreationTimestamp="2026-02-19 09:15:02 +0000 UTC" firstStartedPulling="2026-02-19 09:15:03.514921927 +0000 UTC m=+3246.258579376" lastFinishedPulling="2026-02-19 09:15:05.931481422 +0000 UTC m=+3248.675138861" observedRunningTime="2026-02-19 09:15:06.562602695 +0000 UTC m=+3249.306260154" watchObservedRunningTime="2026-02-19 09:15:06.568479011 +0000 UTC m=+3249.312136460" Feb 19 09:15:12 crc kubenswrapper[4780]: I0219 09:15:12.432166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:12 crc kubenswrapper[4780]: I0219 09:15:12.432247 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:12 crc kubenswrapper[4780]: I0219 09:15:12.501495 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:12 crc kubenswrapper[4780]: I0219 09:15:12.631457 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:12 crc kubenswrapper[4780]: I0219 09:15:12.735234 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v4gmf"] Feb 19 09:15:14 crc kubenswrapper[4780]: I0219 09:15:14.607442 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v4gmf" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="registry-server" containerID="cri-o://c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4" gracePeriod=2 Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.022105 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.128260 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfsvd\" (UniqueName: \"kubernetes.io/projected/65889cff-9ae6-4e54-994c-d3742363db8d-kube-api-access-dfsvd\") pod \"65889cff-9ae6-4e54-994c-d3742363db8d\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.128338 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-utilities\") pod \"65889cff-9ae6-4e54-994c-d3742363db8d\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.128400 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-catalog-content\") pod \"65889cff-9ae6-4e54-994c-d3742363db8d\" (UID: \"65889cff-9ae6-4e54-994c-d3742363db8d\") " Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.129608 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-utilities" (OuterVolumeSpecName: "utilities") pod "65889cff-9ae6-4e54-994c-d3742363db8d" (UID: "65889cff-9ae6-4e54-994c-d3742363db8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.138359 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65889cff-9ae6-4e54-994c-d3742363db8d-kube-api-access-dfsvd" (OuterVolumeSpecName: "kube-api-access-dfsvd") pod "65889cff-9ae6-4e54-994c-d3742363db8d" (UID: "65889cff-9ae6-4e54-994c-d3742363db8d"). InnerVolumeSpecName "kube-api-access-dfsvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.192643 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65889cff-9ae6-4e54-994c-d3742363db8d" (UID: "65889cff-9ae6-4e54-994c-d3742363db8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.229828 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.229888 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfsvd\" (UniqueName: \"kubernetes.io/projected/65889cff-9ae6-4e54-994c-d3742363db8d-kube-api-access-dfsvd\") on node \"crc\" DevicePath \"\"" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.229901 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65889cff-9ae6-4e54-994c-d3742363db8d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.632365 4780 generic.go:334] "Generic (PLEG): container finished" podID="65889cff-9ae6-4e54-994c-d3742363db8d" containerID="c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4" exitCode=0 Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.632430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4gmf" event={"ID":"65889cff-9ae6-4e54-994c-d3742363db8d","Type":"ContainerDied","Data":"c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4"} Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.632451 4780 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4gmf" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.632470 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4gmf" event={"ID":"65889cff-9ae6-4e54-994c-d3742363db8d","Type":"ContainerDied","Data":"4eceafbc3ba771531c05c60ef91bd383bfae3b6fa121f6d5d3b4e466d5de6c1b"} Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.632500 4780 scope.go:117] "RemoveContainer" containerID="c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.654641 4780 scope.go:117] "RemoveContainer" containerID="4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.674313 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v4gmf"] Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.680819 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v4gmf"] Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.697390 4780 scope.go:117] "RemoveContainer" containerID="57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.714226 4780 scope.go:117] "RemoveContainer" containerID="c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4" Feb 19 09:15:15 crc kubenswrapper[4780]: E0219 09:15:15.714573 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4\": container with ID starting with c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4 not found: ID does not exist" containerID="c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.714609 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4"} err="failed to get container status \"c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4\": rpc error: code = NotFound desc = could not find container \"c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4\": container with ID starting with c0c8b2f8014e09307ac3e6ae96f61f8eb5a7e7c2f69493018a573812d308bbc4 not found: ID does not exist" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.714636 4780 scope.go:117] "RemoveContainer" containerID="4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819" Feb 19 09:15:15 crc kubenswrapper[4780]: E0219 09:15:15.714935 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819\": container with ID starting with 4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819 not found: ID does not exist" containerID="4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.714965 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819"} err="failed to get container status \"4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819\": rpc error: code = NotFound desc = could not find container \"4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819\": container with ID starting with 4c1b20c43366b056ff7df7041aedba4d7e33dca47853b6b7193cca455e7a3819 not found: ID does not exist" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.714982 4780 scope.go:117] "RemoveContainer" containerID="57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e" Feb 19 09:15:15 crc kubenswrapper[4780]: E0219 
09:15:15.715254 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e\": container with ID starting with 57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e not found: ID does not exist" containerID="57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.715293 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e"} err="failed to get container status \"57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e\": rpc error: code = NotFound desc = could not find container \"57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e\": container with ID starting with 57017111ef029e45c21e9197960a7b008a12890b68ea42d407f598b034eeb38e not found: ID does not exist" Feb 19 09:15:15 crc kubenswrapper[4780]: I0219 09:15:15.947205 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" path="/var/lib/kubelet/pods/65889cff-9ae6-4e54-994c-d3742363db8d/volumes" Feb 19 09:17:06 crc kubenswrapper[4780]: I0219 09:17:06.336358 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:17:06 crc kubenswrapper[4780]: I0219 09:17:06.336982 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 09:17:36 crc kubenswrapper[4780]: I0219 09:17:36.336228 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:17:36 crc kubenswrapper[4780]: I0219 09:17:36.336932 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.335752 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.336261 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.336307 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.336954 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81aea7bc89e55250a479eb04de75ac3b9e4e9967ddf44a0d04f2d0e5b4e1f514"} 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.337019 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://81aea7bc89e55250a479eb04de75ac3b9e4e9967ddf44a0d04f2d0e5b4e1f514" gracePeriod=600 Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.997519 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="81aea7bc89e55250a479eb04de75ac3b9e4e9967ddf44a0d04f2d0e5b4e1f514" exitCode=0 Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.997581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"81aea7bc89e55250a479eb04de75ac3b9e4e9967ddf44a0d04f2d0e5b4e1f514"} Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.997882 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"} Feb 19 09:18:06 crc kubenswrapper[4780]: I0219 09:18:06.997909 4780 scope.go:117] "RemoveContainer" containerID="7f0a0c9ba8102bc99ef7d03c36221089450c4024604d2355ce0ce27fe5f366db" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.526551 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-stdnr"] Feb 19 09:19:37 crc kubenswrapper[4780]: E0219 09:19:37.527573 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d67ee36-1d79-4e07-84e1-ab2393377346" containerName="collect-profiles" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.527586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d67ee36-1d79-4e07-84e1-ab2393377346" containerName="collect-profiles" Feb 19 09:19:37 crc kubenswrapper[4780]: E0219 09:19:37.527608 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="extract-utilities" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.527616 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="extract-utilities" Feb 19 09:19:37 crc kubenswrapper[4780]: E0219 09:19:37.527640 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="registry-server" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.527649 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="registry-server" Feb 19 09:19:37 crc kubenswrapper[4780]: E0219 09:19:37.527661 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="extract-content" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.527667 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="extract-content" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.527828 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d67ee36-1d79-4e07-84e1-ab2393377346" containerName="collect-profiles" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.527839 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="65889cff-9ae6-4e54-994c-d3742363db8d" containerName="registry-server" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.528846 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.544800 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stdnr"] Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.686477 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-catalog-content\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.686528 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-utilities\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.686614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8cd\" (UniqueName: \"kubernetes.io/projected/851f0df6-53f8-4280-b05e-5606bb3658b5-kube-api-access-8n8cd\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.787623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8cd\" (UniqueName: \"kubernetes.io/projected/851f0df6-53f8-4280-b05e-5606bb3658b5-kube-api-access-8n8cd\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.787715 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-catalog-content\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.787732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-utilities\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.788321 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-utilities\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.788846 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-catalog-content\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.809889 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8cd\" (UniqueName: \"kubernetes.io/projected/851f0df6-53f8-4280-b05e-5606bb3658b5-kube-api-access-8n8cd\") pod \"redhat-marketplace-stdnr\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:37 crc kubenswrapper[4780]: I0219 09:19:37.906365 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:38 crc kubenswrapper[4780]: I0219 09:19:38.193415 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stdnr"] Feb 19 09:19:38 crc kubenswrapper[4780]: I0219 09:19:38.725744 4780 generic.go:334] "Generic (PLEG): container finished" podID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerID="1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e" exitCode=0 Feb 19 09:19:38 crc kubenswrapper[4780]: I0219 09:19:38.725807 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stdnr" event={"ID":"851f0df6-53f8-4280-b05e-5606bb3658b5","Type":"ContainerDied","Data":"1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e"} Feb 19 09:19:38 crc kubenswrapper[4780]: I0219 09:19:38.726174 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stdnr" event={"ID":"851f0df6-53f8-4280-b05e-5606bb3658b5","Type":"ContainerStarted","Data":"2a17f2dd60cf8a91cee804ac212c160ad84d639d3f4578d70b19a6ce9f52ebfd"} Feb 19 09:19:39 crc kubenswrapper[4780]: I0219 09:19:39.740314 4780 generic.go:334] "Generic (PLEG): container finished" podID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerID="ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416" exitCode=0 Feb 19 09:19:39 crc kubenswrapper[4780]: I0219 09:19:39.740418 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stdnr" event={"ID":"851f0df6-53f8-4280-b05e-5606bb3658b5","Type":"ContainerDied","Data":"ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416"} Feb 19 09:19:40 crc kubenswrapper[4780]: I0219 09:19:40.749506 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stdnr" 
event={"ID":"851f0df6-53f8-4280-b05e-5606bb3658b5","Type":"ContainerStarted","Data":"47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d"} Feb 19 09:19:40 crc kubenswrapper[4780]: I0219 09:19:40.781261 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-stdnr" podStartSLOduration=2.314146522 podStartE2EDuration="3.781239852s" podCreationTimestamp="2026-02-19 09:19:37 +0000 UTC" firstStartedPulling="2026-02-19 09:19:38.728568235 +0000 UTC m=+3521.472225684" lastFinishedPulling="2026-02-19 09:19:40.195661525 +0000 UTC m=+3522.939319014" observedRunningTime="2026-02-19 09:19:40.774737171 +0000 UTC m=+3523.518394630" watchObservedRunningTime="2026-02-19 09:19:40.781239852 +0000 UTC m=+3523.524897311" Feb 19 09:19:47 crc kubenswrapper[4780]: I0219 09:19:47.906875 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:47 crc kubenswrapper[4780]: I0219 09:19:47.907163 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:47 crc kubenswrapper[4780]: I0219 09:19:47.950821 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:48 crc kubenswrapper[4780]: I0219 09:19:48.900074 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:48 crc kubenswrapper[4780]: I0219 09:19:48.977498 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stdnr"] Feb 19 09:19:50 crc kubenswrapper[4780]: I0219 09:19:50.854486 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-stdnr" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="registry-server" 
containerID="cri-o://47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d" gracePeriod=2 Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.794477 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.863405 4780 generic.go:334] "Generic (PLEG): container finished" podID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerID="47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d" exitCode=0 Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.863467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stdnr" event={"ID":"851f0df6-53f8-4280-b05e-5606bb3658b5","Type":"ContainerDied","Data":"47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d"} Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.863528 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stdnr" event={"ID":"851f0df6-53f8-4280-b05e-5606bb3658b5","Type":"ContainerDied","Data":"2a17f2dd60cf8a91cee804ac212c160ad84d639d3f4578d70b19a6ce9f52ebfd"} Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.863553 4780 scope.go:117] "RemoveContainer" containerID="47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.863481 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stdnr" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.885068 4780 scope.go:117] "RemoveContainer" containerID="ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.899498 4780 scope.go:117] "RemoveContainer" containerID="1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.921430 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-catalog-content\") pod \"851f0df6-53f8-4280-b05e-5606bb3658b5\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.921683 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8cd\" (UniqueName: \"kubernetes.io/projected/851f0df6-53f8-4280-b05e-5606bb3658b5-kube-api-access-8n8cd\") pod \"851f0df6-53f8-4280-b05e-5606bb3658b5\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.921859 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-utilities\") pod \"851f0df6-53f8-4280-b05e-5606bb3658b5\" (UID: \"851f0df6-53f8-4280-b05e-5606bb3658b5\") " Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.922708 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-utilities" (OuterVolumeSpecName: "utilities") pod "851f0df6-53f8-4280-b05e-5606bb3658b5" (UID: "851f0df6-53f8-4280-b05e-5606bb3658b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.924830 4780 scope.go:117] "RemoveContainer" containerID="47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d" Feb 19 09:19:51 crc kubenswrapper[4780]: E0219 09:19:51.925455 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d\": container with ID starting with 47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d not found: ID does not exist" containerID="47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.925535 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d"} err="failed to get container status \"47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d\": rpc error: code = NotFound desc = could not find container \"47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d\": container with ID starting with 47d5b79ebe51b1dc7668a542a1b80ec12744a541120ca0e66acdf5b518cd334d not found: ID does not exist" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.925570 4780 scope.go:117] "RemoveContainer" containerID="ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416" Feb 19 09:19:51 crc kubenswrapper[4780]: E0219 09:19:51.926023 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416\": container with ID starting with ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416 not found: ID does not exist" containerID="ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416" Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.926058 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416"} err="failed to get container status \"ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416\": rpc error: code = NotFound desc = could not find container \"ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416\": container with ID starting with ce277a55f9759432e900f15de9f13234694f30b6fca503c23c0b03dabcb28416 not found: ID does not exist"
Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.926082 4780 scope.go:117] "RemoveContainer" containerID="1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e"
Feb 19 09:19:51 crc kubenswrapper[4780]: E0219 09:19:51.926347 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e\": container with ID starting with 1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e not found: ID does not exist" containerID="1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e"
Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.926437 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e"} err="failed to get container status \"1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e\": rpc error: code = NotFound desc = could not find container \"1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e\": container with ID starting with 1dded336829afe86e9dd831448cd0534adc456ea849a9f1e83d85723743f0b5e not found: ID does not exist"
Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.927287 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851f0df6-53f8-4280-b05e-5606bb3658b5-kube-api-access-8n8cd" (OuterVolumeSpecName: "kube-api-access-8n8cd") pod "851f0df6-53f8-4280-b05e-5606bb3658b5" (UID: "851f0df6-53f8-4280-b05e-5606bb3658b5"). InnerVolumeSpecName "kube-api-access-8n8cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:19:51 crc kubenswrapper[4780]: I0219 09:19:51.947862 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "851f0df6-53f8-4280-b05e-5606bb3658b5" (UID: "851f0df6-53f8-4280-b05e-5606bb3658b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:19:52 crc kubenswrapper[4780]: I0219 09:19:52.024037 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8cd\" (UniqueName: \"kubernetes.io/projected/851f0df6-53f8-4280-b05e-5606bb3658b5-kube-api-access-8n8cd\") on node \"crc\" DevicePath \"\""
Feb 19 09:19:52 crc kubenswrapper[4780]: I0219 09:19:52.024376 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 09:19:52 crc kubenswrapper[4780]: I0219 09:19:52.024514 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851f0df6-53f8-4280-b05e-5606bb3658b5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 09:19:52 crc kubenswrapper[4780]: I0219 09:19:52.203877 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stdnr"]
Feb 19 09:19:52 crc kubenswrapper[4780]: I0219 09:19:52.217678 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-stdnr"]
Feb 19 09:19:53 crc kubenswrapper[4780]: I0219 09:19:53.955538 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" path="/var/lib/kubelet/pods/851f0df6-53f8-4280-b05e-5606bb3658b5/volumes"
Feb 19 09:20:06 crc kubenswrapper[4780]: I0219 09:20:06.335766 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:20:06 crc kubenswrapper[4780]: I0219 09:20:06.336285 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:20:36 crc kubenswrapper[4780]: I0219 09:20:36.336329 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:20:36 crc kubenswrapper[4780]: I0219 09:20:36.336850 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.336995 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.337675 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.337733 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts"
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.338387 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.338470 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" gracePeriod=600
Feb 19 09:21:06 crc kubenswrapper[4780]: E0219 09:21:06.492844 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.515552 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" exitCode=0
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.515613 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"}
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.515664 4780 scope.go:117] "RemoveContainer" containerID="81aea7bc89e55250a479eb04de75ac3b9e4e9967ddf44a0d04f2d0e5b4e1f514"
Feb 19 09:21:06 crc kubenswrapper[4780]: I0219 09:21:06.520464 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"
Feb 19 09:21:06 crc kubenswrapper[4780]: E0219 09:21:06.520965 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:21:19 crc kubenswrapper[4780]: I0219 09:21:19.939684 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"
Feb 19 09:21:19 crc kubenswrapper[4780]: E0219 09:21:19.941636 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.141941 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jznqb"]
Feb 19 09:21:22 crc kubenswrapper[4780]: E0219 09:21:22.142916 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="extract-utilities"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.142946 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="extract-utilities"
Feb 19 09:21:22 crc kubenswrapper[4780]: E0219 09:21:22.142985 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="extract-content"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.143002 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="extract-content"
Feb 19 09:21:22 crc kubenswrapper[4780]: E0219 09:21:22.143029 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="registry-server"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.143049 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="registry-server"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.143486 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="851f0df6-53f8-4280-b05e-5606bb3658b5" containerName="registry-server"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.145833 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.154177 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jznqb"]
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.304552 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-catalog-content\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.304619 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-utilities\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.304642 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skfjl\" (UniqueName: \"kubernetes.io/projected/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-kube-api-access-skfjl\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.406110 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-catalog-content\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.406224 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-utilities\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.406663 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-catalog-content\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.406255 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skfjl\" (UniqueName: \"kubernetes.io/projected/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-kube-api-access-skfjl\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.406917 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-utilities\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.429999 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skfjl\" (UniqueName: \"kubernetes.io/projected/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-kube-api-access-skfjl\") pod \"certified-operators-jznqb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") " pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.467538 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:22 crc kubenswrapper[4780]: I0219 09:21:22.922822 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jznqb"]
Feb 19 09:21:23 crc kubenswrapper[4780]: I0219 09:21:23.669634 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerID="2f6f135462717383b882bc88bb8140be52a6ed7937ad0b7e14425a6dfa47592d" exitCode=0
Feb 19 09:21:23 crc kubenswrapper[4780]: I0219 09:21:23.669696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerDied","Data":"2f6f135462717383b882bc88bb8140be52a6ed7937ad0b7e14425a6dfa47592d"}
Feb 19 09:21:23 crc kubenswrapper[4780]: I0219 09:21:23.669728 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerStarted","Data":"9d29b91a939ca4850bdb15907992295b94c2c0cbc1eb908ba5549e6ac79d776b"}
Feb 19 09:21:23 crc kubenswrapper[4780]: I0219 09:21:23.673630 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 09:21:24 crc kubenswrapper[4780]: I0219 09:21:24.681548 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerStarted","Data":"71316b2a00f61fe0422b35000c1be689ac5f5578d43febdd41a70b3a7bab16c9"}
Feb 19 09:21:25 crc kubenswrapper[4780]: I0219 09:21:25.692011 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerID="71316b2a00f61fe0422b35000c1be689ac5f5578d43febdd41a70b3a7bab16c9" exitCode=0
Feb 19 09:21:25 crc kubenswrapper[4780]: I0219 09:21:25.692156 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerDied","Data":"71316b2a00f61fe0422b35000c1be689ac5f5578d43febdd41a70b3a7bab16c9"}
Feb 19 09:21:26 crc kubenswrapper[4780]: I0219 09:21:26.705424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerStarted","Data":"35566f042caa5da701071e910e3b7ce2ce85253721e29cbf3a41922379ce4f3e"}
Feb 19 09:21:26 crc kubenswrapper[4780]: I0219 09:21:26.729177 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jznqb" podStartSLOduration=2.299284027 podStartE2EDuration="4.7291548s" podCreationTimestamp="2026-02-19 09:21:22 +0000 UTC" firstStartedPulling="2026-02-19 09:21:23.673169257 +0000 UTC m=+3626.416826716" lastFinishedPulling="2026-02-19 09:21:26.10304004 +0000 UTC m=+3628.846697489" observedRunningTime="2026-02-19 09:21:26.723533631 +0000 UTC m=+3629.467191080" watchObservedRunningTime="2026-02-19 09:21:26.7291548 +0000 UTC m=+3629.472812259"
Feb 19 09:21:31 crc kubenswrapper[4780]: I0219 09:21:31.938796 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"
Feb 19 09:21:31 crc kubenswrapper[4780]: E0219 09:21:31.940600 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:21:32 crc kubenswrapper[4780]: I0219 09:21:32.468449 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:32 crc kubenswrapper[4780]: I0219 09:21:32.468732 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:32 crc kubenswrapper[4780]: I0219 09:21:32.547877 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:32 crc kubenswrapper[4780]: I0219 09:21:32.789781 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:32 crc kubenswrapper[4780]: I0219 09:21:32.835375 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jznqb"]
Feb 19 09:21:34 crc kubenswrapper[4780]: I0219 09:21:34.765909 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jznqb" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="registry-server" containerID="cri-o://35566f042caa5da701071e910e3b7ce2ce85253721e29cbf3a41922379ce4f3e" gracePeriod=2
Feb 19 09:21:35 crc kubenswrapper[4780]: I0219 09:21:35.801513 4780 generic.go:334] "Generic (PLEG): container finished" podID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerID="35566f042caa5da701071e910e3b7ce2ce85253721e29cbf3a41922379ce4f3e" exitCode=0
Feb 19 09:21:35 crc kubenswrapper[4780]: I0219 09:21:35.801589 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerDied","Data":"35566f042caa5da701071e910e3b7ce2ce85253721e29cbf3a41922379ce4f3e"}
Feb 19 09:21:35 crc kubenswrapper[4780]: I0219 09:21:35.869900 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.024739 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skfjl\" (UniqueName: \"kubernetes.io/projected/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-kube-api-access-skfjl\") pod \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") "
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.024931 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-utilities\") pod \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") "
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.025325 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-catalog-content\") pod \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\" (UID: \"2e4fbc67-a6e1-441c-bc5f-374c667d4efb\") "
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.026421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-utilities" (OuterVolumeSpecName: "utilities") pod "2e4fbc67-a6e1-441c-bc5f-374c667d4efb" (UID: "2e4fbc67-a6e1-441c-bc5f-374c667d4efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.026802 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.031378 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-kube-api-access-skfjl" (OuterVolumeSpecName: "kube-api-access-skfjl") pod "2e4fbc67-a6e1-441c-bc5f-374c667d4efb" (UID: "2e4fbc67-a6e1-441c-bc5f-374c667d4efb"). InnerVolumeSpecName "kube-api-access-skfjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.128359 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skfjl\" (UniqueName: \"kubernetes.io/projected/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-kube-api-access-skfjl\") on node \"crc\" DevicePath \"\""
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.302037 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e4fbc67-a6e1-441c-bc5f-374c667d4efb" (UID: "2e4fbc67-a6e1-441c-bc5f-374c667d4efb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.331551 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fbc67-a6e1-441c-bc5f-374c667d4efb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.808752 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jznqb" event={"ID":"2e4fbc67-a6e1-441c-bc5f-374c667d4efb","Type":"ContainerDied","Data":"9d29b91a939ca4850bdb15907992295b94c2c0cbc1eb908ba5549e6ac79d776b"}
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.808822 4780 scope.go:117] "RemoveContainer" containerID="35566f042caa5da701071e910e3b7ce2ce85253721e29cbf3a41922379ce4f3e"
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.808819 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jznqb"
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.841006 4780 scope.go:117] "RemoveContainer" containerID="71316b2a00f61fe0422b35000c1be689ac5f5578d43febdd41a70b3a7bab16c9"
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.849007 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jznqb"]
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.860526 4780 scope.go:117] "RemoveContainer" containerID="2f6f135462717383b882bc88bb8140be52a6ed7937ad0b7e14425a6dfa47592d"
Feb 19 09:21:36 crc kubenswrapper[4780]: I0219 09:21:36.860782 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jznqb"]
Feb 19 09:21:37 crc kubenswrapper[4780]: I0219 09:21:37.953405 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" path="/var/lib/kubelet/pods/2e4fbc67-a6e1-441c-bc5f-374c667d4efb/volumes"
Feb 19 09:21:45 crc kubenswrapper[4780]: I0219 09:21:45.938474 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"
Feb 19 09:21:45 crc kubenswrapper[4780]: E0219 09:21:45.939173 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.728622 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-99dzp"]
Feb 19 09:21:57 crc kubenswrapper[4780]: E0219 09:21:57.729268 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="extract-utilities"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.729286 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="extract-utilities"
Feb 19 09:21:57 crc kubenswrapper[4780]: E0219 09:21:57.729309 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="registry-server"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.729318 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="registry-server"
Feb 19 09:21:57 crc kubenswrapper[4780]: E0219 09:21:57.729328 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="extract-content"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.729335 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="extract-content"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.729496 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4fbc67-a6e1-441c-bc5f-374c667d4efb" containerName="registry-server"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.730525 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.741768 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99dzp"]
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.831110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-catalog-content\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.831219 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-utilities\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.831250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pchjw\" (UniqueName: \"kubernetes.io/projected/55c32790-b489-4568-99d0-f1ca8fe92d5b-kube-api-access-pchjw\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.932616 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-catalog-content\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.932885 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-utilities\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.932982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pchjw\" (UniqueName: \"kubernetes.io/projected/55c32790-b489-4568-99d0-f1ca8fe92d5b-kube-api-access-pchjw\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.933259 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-catalog-content\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.933383 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-utilities\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:57 crc kubenswrapper[4780]: I0219 09:21:57.953720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pchjw\" (UniqueName: \"kubernetes.io/projected/55c32790-b489-4568-99d0-f1ca8fe92d5b-kube-api-access-pchjw\") pod \"redhat-operators-99dzp\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") " pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:58 crc kubenswrapper[4780]: I0219 09:21:58.107178 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:21:58 crc kubenswrapper[4780]: I0219 09:21:58.551434 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99dzp"]
Feb 19 09:21:59 crc kubenswrapper[4780]: I0219 09:21:58.999842 4780 generic.go:334] "Generic (PLEG): container finished" podID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerID="c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788" exitCode=0
Feb 19 09:21:59 crc kubenswrapper[4780]: I0219 09:21:59.000152 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerDied","Data":"c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788"}
Feb 19 09:21:59 crc kubenswrapper[4780]: I0219 09:21:59.000183 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerStarted","Data":"faaa3fe7e0c8cf2e26a68e3a8607137bbf719ff1e6df72abc8ef629440408a87"}
Feb 19 09:21:59 crc kubenswrapper[4780]: I0219 09:21:59.938966 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"
Feb 19 09:21:59 crc kubenswrapper[4780]: E0219 09:21:59.939532 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:22:01 crc kubenswrapper[4780]: I0219 09:22:01.016939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerStarted","Data":"c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee"}
Feb 19 09:22:02 crc kubenswrapper[4780]: I0219 09:22:02.027438 4780 generic.go:334] "Generic (PLEG): container finished" podID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerID="c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee" exitCode=0
Feb 19 09:22:02 crc kubenswrapper[4780]: I0219 09:22:02.027520 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerDied","Data":"c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee"}
Feb 19 09:22:03 crc kubenswrapper[4780]: I0219 09:22:03.036390 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerStarted","Data":"baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063"}
Feb 19 09:22:03 crc kubenswrapper[4780]: I0219 09:22:03.056433 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-99dzp" podStartSLOduration=2.3381653829999998 podStartE2EDuration="6.056415672s" podCreationTimestamp="2026-02-19 09:21:57 +0000 UTC" firstStartedPulling="2026-02-19 09:21:59.010391658 +0000 UTC m=+3661.754049107" lastFinishedPulling="2026-02-19 09:22:02.728641917 +0000 UTC m=+3665.472299396" observedRunningTime="2026-02-19 09:22:03.05111347 +0000 UTC m=+3665.794770919" watchObservedRunningTime="2026-02-19 09:22:03.056415672 +0000 UTC m=+3665.800073121"
Feb 19 09:22:08 crc kubenswrapper[4780]: I0219 09:22:08.108155 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:22:08 crc kubenswrapper[4780]: I0219 09:22:08.108498 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:22:09 crc kubenswrapper[4780]: I0219 09:22:09.162619 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-99dzp" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="registry-server" probeResult="failure" output=<
Feb 19 09:22:09 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s
Feb 19 09:22:09 crc kubenswrapper[4780]: >
Feb 19 09:22:13 crc kubenswrapper[4780]: I0219 09:22:13.939105 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39"
Feb 19 09:22:13 crc kubenswrapper[4780]: E0219 09:22:13.939898 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:22:18 crc kubenswrapper[4780]: I0219 09:22:18.157485 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:22:18 crc kubenswrapper[4780]: I0219 09:22:18.204092 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:22:18 crc kubenswrapper[4780]: I0219 09:22:18.418597 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99dzp"]
Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.153022 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-99dzp" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="registry-server" containerID="cri-o://baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063" gracePeriod=2
Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.624093 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99dzp"
Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.680251 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pchjw\" (UniqueName: \"kubernetes.io/projected/55c32790-b489-4568-99d0-f1ca8fe92d5b-kube-api-access-pchjw\") pod \"55c32790-b489-4568-99d0-f1ca8fe92d5b\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") "
Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.680352 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-catalog-content\") pod \"55c32790-b489-4568-99d0-f1ca8fe92d5b\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") "
Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.680440 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-utilities\") pod \"55c32790-b489-4568-99d0-f1ca8fe92d5b\" (UID: \"55c32790-b489-4568-99d0-f1ca8fe92d5b\") "
Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.681601 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-utilities" (OuterVolumeSpecName: "utilities") pod
"55c32790-b489-4568-99d0-f1ca8fe92d5b" (UID: "55c32790-b489-4568-99d0-f1ca8fe92d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.686244 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c32790-b489-4568-99d0-f1ca8fe92d5b-kube-api-access-pchjw" (OuterVolumeSpecName: "kube-api-access-pchjw") pod "55c32790-b489-4568-99d0-f1ca8fe92d5b" (UID: "55c32790-b489-4568-99d0-f1ca8fe92d5b"). InnerVolumeSpecName "kube-api-access-pchjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.781803 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pchjw\" (UniqueName: \"kubernetes.io/projected/55c32790-b489-4568-99d0-f1ca8fe92d5b-kube-api-access-pchjw\") on node \"crc\" DevicePath \"\"" Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.781843 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.829490 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55c32790-b489-4568-99d0-f1ca8fe92d5b" (UID: "55c32790-b489-4568-99d0-f1ca8fe92d5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:22:20 crc kubenswrapper[4780]: I0219 09:22:20.884503 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c32790-b489-4568-99d0-f1ca8fe92d5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.161475 4780 generic.go:334] "Generic (PLEG): container finished" podID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerID="baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063" exitCode=0 Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.161495 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerDied","Data":"baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063"} Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.161635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99dzp" event={"ID":"55c32790-b489-4568-99d0-f1ca8fe92d5b","Type":"ContainerDied","Data":"faaa3fe7e0c8cf2e26a68e3a8607137bbf719ff1e6df72abc8ef629440408a87"} Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.161664 4780 scope.go:117] "RemoveContainer" containerID="baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.161515 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99dzp" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.192431 4780 scope.go:117] "RemoveContainer" containerID="c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.201786 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99dzp"] Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.207906 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-99dzp"] Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.221971 4780 scope.go:117] "RemoveContainer" containerID="c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.246700 4780 scope.go:117] "RemoveContainer" containerID="baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063" Feb 19 09:22:21 crc kubenswrapper[4780]: E0219 09:22:21.247229 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063\": container with ID starting with baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063 not found: ID does not exist" containerID="baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.247275 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063"} err="failed to get container status \"baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063\": rpc error: code = NotFound desc = could not find container \"baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063\": container with ID starting with baa0d9a71bff4370943bab63d9fdd8ac2ebf14ad3f61769ac365eab6449d7063 not found: ID does 
not exist" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.247303 4780 scope.go:117] "RemoveContainer" containerID="c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee" Feb 19 09:22:21 crc kubenswrapper[4780]: E0219 09:22:21.247711 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee\": container with ID starting with c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee not found: ID does not exist" containerID="c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.247752 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee"} err="failed to get container status \"c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee\": rpc error: code = NotFound desc = could not find container \"c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee\": container with ID starting with c9384ffac5fd5269b3e7a407a7aaf5785179bdc48aae07ac1fa9cc767ec9fdee not found: ID does not exist" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.247782 4780 scope.go:117] "RemoveContainer" containerID="c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788" Feb 19 09:22:21 crc kubenswrapper[4780]: E0219 09:22:21.248141 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788\": container with ID starting with c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788 not found: ID does not exist" containerID="c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.248174 4780 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788"} err="failed to get container status \"c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788\": rpc error: code = NotFound desc = could not find container \"c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788\": container with ID starting with c6e6c7ecf112b5384b5aab25474e8e8898f13c92965710212861852fbfc56788 not found: ID does not exist" Feb 19 09:22:21 crc kubenswrapper[4780]: I0219 09:22:21.955413 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" path="/var/lib/kubelet/pods/55c32790-b489-4568-99d0-f1ca8fe92d5b/volumes" Feb 19 09:22:27 crc kubenswrapper[4780]: I0219 09:22:27.943267 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:22:27 crc kubenswrapper[4780]: E0219 09:22:27.943746 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:22:38 crc kubenswrapper[4780]: I0219 09:22:38.937953 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:22:38 crc kubenswrapper[4780]: E0219 09:22:38.938663 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:22:51 crc kubenswrapper[4780]: I0219 09:22:51.938637 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:22:51 crc kubenswrapper[4780]: E0219 09:22:51.939596 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:23:02 crc kubenswrapper[4780]: I0219 09:23:02.940250 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:23:02 crc kubenswrapper[4780]: E0219 09:23:02.941457 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:23:15 crc kubenswrapper[4780]: I0219 09:23:15.941269 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:23:15 crc kubenswrapper[4780]: E0219 09:23:15.942747 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:23:30 crc kubenswrapper[4780]: I0219 09:23:30.939018 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:23:30 crc kubenswrapper[4780]: E0219 09:23:30.940202 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:23:43 crc kubenswrapper[4780]: I0219 09:23:43.937906 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:23:43 crc kubenswrapper[4780]: E0219 09:23:43.938785 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:23:56 crc kubenswrapper[4780]: I0219 09:23:56.938957 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:23:56 crc kubenswrapper[4780]: E0219 09:23:56.939465 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:24:07 crc kubenswrapper[4780]: I0219 09:24:07.944484 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:24:07 crc kubenswrapper[4780]: E0219 09:24:07.945549 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:24:19 crc kubenswrapper[4780]: I0219 09:24:19.940784 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:24:19 crc kubenswrapper[4780]: E0219 09:24:19.941664 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:24:34 crc kubenswrapper[4780]: I0219 09:24:34.938622 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:24:34 crc kubenswrapper[4780]: E0219 09:24:34.939473 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:24:45 crc kubenswrapper[4780]: I0219 09:24:45.938854 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:24:45 crc kubenswrapper[4780]: E0219 09:24:45.940062 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:24:59 crc kubenswrapper[4780]: I0219 09:24:59.938341 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:24:59 crc kubenswrapper[4780]: E0219 09:24:59.939276 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:25:10 crc kubenswrapper[4780]: I0219 09:25:10.939694 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:25:10 crc kubenswrapper[4780]: E0219 09:25:10.942338 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:25:21 crc kubenswrapper[4780]: I0219 09:25:21.938866 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:25:21 crc kubenswrapper[4780]: E0219 09:25:21.940277 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:25:34 crc kubenswrapper[4780]: I0219 09:25:34.938690 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:25:34 crc kubenswrapper[4780]: E0219 09:25:34.939323 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:25:45 crc kubenswrapper[4780]: I0219 09:25:45.938605 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:25:45 crc kubenswrapper[4780]: E0219 09:25:45.939188 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:25:58 crc kubenswrapper[4780]: I0219 09:25:58.938706 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:25:58 crc kubenswrapper[4780]: E0219 09:25:58.939814 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.130783 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfm6w"] Feb 19 09:26:01 crc kubenswrapper[4780]: E0219 09:26:01.131836 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="registry-server" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.131854 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="registry-server" Feb 19 09:26:01 crc kubenswrapper[4780]: E0219 09:26:01.131874 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="extract-content" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.131880 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="extract-content" Feb 19 09:26:01 crc kubenswrapper[4780]: E0219 
09:26:01.131890 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="extract-utilities" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.131898 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="extract-utilities" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.132064 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c32790-b489-4568-99d0-f1ca8fe92d5b" containerName="registry-server" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.133265 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.159170 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfm6w"] Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.233822 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-utilities\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.233895 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-catalog-content\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.234036 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gfp\" (UniqueName: 
\"kubernetes.io/projected/7fb48b3c-91e0-4af2-9200-c613f9d43e03-kube-api-access-s9gfp\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.335753 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gfp\" (UniqueName: \"kubernetes.io/projected/7fb48b3c-91e0-4af2-9200-c613f9d43e03-kube-api-access-s9gfp\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.335844 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-utilities\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.335874 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-catalog-content\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.336462 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-catalog-content\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.336546 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-utilities\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.366223 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gfp\" (UniqueName: \"kubernetes.io/projected/7fb48b3c-91e0-4af2-9200-c613f9d43e03-kube-api-access-s9gfp\") pod \"community-operators-tfm6w\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.464886 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:01 crc kubenswrapper[4780]: I0219 09:26:01.986282 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfm6w"] Feb 19 09:26:02 crc kubenswrapper[4780]: I0219 09:26:02.204926 4780 generic.go:334] "Generic (PLEG): container finished" podID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerID="7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b" exitCode=0 Feb 19 09:26:02 crc kubenswrapper[4780]: I0219 09:26:02.204981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerDied","Data":"7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b"} Feb 19 09:26:02 crc kubenswrapper[4780]: I0219 09:26:02.205013 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerStarted","Data":"bd4431f7186d10a0598a86225e9ad5df575274053c26599a4f611bf105fc6f92"} Feb 19 09:26:03 crc kubenswrapper[4780]: I0219 09:26:03.215602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerStarted","Data":"06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a"} Feb 19 09:26:04 crc kubenswrapper[4780]: I0219 09:26:04.224713 4780 generic.go:334] "Generic (PLEG): container finished" podID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerID="06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a" exitCode=0 Feb 19 09:26:04 crc kubenswrapper[4780]: I0219 09:26:04.224781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerDied","Data":"06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a"} Feb 19 09:26:05 crc kubenswrapper[4780]: I0219 09:26:05.236511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerStarted","Data":"f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896"} Feb 19 09:26:05 crc kubenswrapper[4780]: I0219 09:26:05.275602 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfm6w" podStartSLOduration=1.585721277 podStartE2EDuration="4.275578636s" podCreationTimestamp="2026-02-19 09:26:01 +0000 UTC" firstStartedPulling="2026-02-19 09:26:02.207991816 +0000 UTC m=+3904.951649265" lastFinishedPulling="2026-02-19 09:26:04.897849175 +0000 UTC m=+3907.641506624" observedRunningTime="2026-02-19 09:26:05.274972221 +0000 UTC m=+3908.018629670" watchObservedRunningTime="2026-02-19 09:26:05.275578636 +0000 UTC m=+3908.019236085" Feb 19 09:26:11 crc kubenswrapper[4780]: I0219 09:26:11.465072 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:11 crc kubenswrapper[4780]: I0219 09:26:11.465632 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:11 crc kubenswrapper[4780]: I0219 09:26:11.503659 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:12 crc kubenswrapper[4780]: I0219 09:26:12.347366 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:12 crc kubenswrapper[4780]: I0219 09:26:12.390751 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfm6w"] Feb 19 09:26:12 crc kubenswrapper[4780]: I0219 09:26:12.938322 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.294599 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"d4a8e5f63274313b74fa3af137147c3d91f1c25ef5570c8872152b48351812e1"} Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.294729 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfm6w" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="registry-server" containerID="cri-o://f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896" gracePeriod=2 Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.827741 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.861165 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-catalog-content\") pod \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.861287 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9gfp\" (UniqueName: \"kubernetes.io/projected/7fb48b3c-91e0-4af2-9200-c613f9d43e03-kube-api-access-s9gfp\") pod \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.861636 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-utilities\") pod \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\" (UID: \"7fb48b3c-91e0-4af2-9200-c613f9d43e03\") " Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.863580 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-utilities" (OuterVolumeSpecName: "utilities") pod "7fb48b3c-91e0-4af2-9200-c613f9d43e03" (UID: "7fb48b3c-91e0-4af2-9200-c613f9d43e03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.875083 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.888194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb48b3c-91e0-4af2-9200-c613f9d43e03-kube-api-access-s9gfp" (OuterVolumeSpecName: "kube-api-access-s9gfp") pod "7fb48b3c-91e0-4af2-9200-c613f9d43e03" (UID: "7fb48b3c-91e0-4af2-9200-c613f9d43e03"). InnerVolumeSpecName "kube-api-access-s9gfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.943171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fb48b3c-91e0-4af2-9200-c613f9d43e03" (UID: "7fb48b3c-91e0-4af2-9200-c613f9d43e03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.976439 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb48b3c-91e0-4af2-9200-c613f9d43e03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:26:14 crc kubenswrapper[4780]: I0219 09:26:14.976471 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9gfp\" (UniqueName: \"kubernetes.io/projected/7fb48b3c-91e0-4af2-9200-c613f9d43e03-kube-api-access-s9gfp\") on node \"crc\" DevicePath \"\"" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.315987 4780 generic.go:334] "Generic (PLEG): container finished" podID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerID="f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896" exitCode=0 Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.316030 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerDied","Data":"f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896"} Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.316060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfm6w" event={"ID":"7fb48b3c-91e0-4af2-9200-c613f9d43e03","Type":"ContainerDied","Data":"bd4431f7186d10a0598a86225e9ad5df575274053c26599a4f611bf105fc6f92"} Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.316059 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfm6w" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.316077 4780 scope.go:117] "RemoveContainer" containerID="f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.346428 4780 scope.go:117] "RemoveContainer" containerID="06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.356467 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfm6w"] Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.366112 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfm6w"] Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.382238 4780 scope.go:117] "RemoveContainer" containerID="7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.399500 4780 scope.go:117] "RemoveContainer" containerID="f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896" Feb 19 09:26:15 crc kubenswrapper[4780]: E0219 09:26:15.399877 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896\": container with ID starting with f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896 not found: ID does not exist" containerID="f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.399924 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896"} err="failed to get container status \"f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896\": rpc error: code = NotFound desc = could not find 
container \"f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896\": container with ID starting with f102035bd5685a3c18f0b48cda5f24c0ce265b7fea68f19fbb657e9478f96896 not found: ID does not exist" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.399944 4780 scope.go:117] "RemoveContainer" containerID="06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a" Feb 19 09:26:15 crc kubenswrapper[4780]: E0219 09:26:15.400358 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a\": container with ID starting with 06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a not found: ID does not exist" containerID="06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.400386 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a"} err="failed to get container status \"06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a\": rpc error: code = NotFound desc = could not find container \"06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a\": container with ID starting with 06e8d3f60819477a1b767dc02552eca6ed86a72054809ba9fd725d0a88d6c94a not found: ID does not exist" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.400404 4780 scope.go:117] "RemoveContainer" containerID="7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b" Feb 19 09:26:15 crc kubenswrapper[4780]: E0219 09:26:15.400774 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b\": container with ID starting with 7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b not found: ID does 
not exist" containerID="7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.400812 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b"} err="failed to get container status \"7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b\": rpc error: code = NotFound desc = could not find container \"7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b\": container with ID starting with 7bf965a559384a3a736717b11ee707709526fbda2ac6f156377697d45d63a27b not found: ID does not exist" Feb 19 09:26:15 crc kubenswrapper[4780]: I0219 09:26:15.952791 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" path="/var/lib/kubelet/pods/7fb48b3c-91e0-4af2-9200-c613f9d43e03/volumes" Feb 19 09:28:36 crc kubenswrapper[4780]: I0219 09:28:36.336010 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:28:36 crc kubenswrapper[4780]: I0219 09:28:36.336535 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:29:06 crc kubenswrapper[4780]: I0219 09:29:06.335950 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 19 09:29:06 crc kubenswrapper[4780]: I0219 09:29:06.338321 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:29:36 crc kubenswrapper[4780]: I0219 09:29:36.337235 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:29:36 crc kubenswrapper[4780]: I0219 09:29:36.337898 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:29:36 crc kubenswrapper[4780]: I0219 09:29:36.337972 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:29:36 crc kubenswrapper[4780]: I0219 09:29:36.338573 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4a8e5f63274313b74fa3af137147c3d91f1c25ef5570c8872152b48351812e1"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:29:36 crc kubenswrapper[4780]: I0219 09:29:36.338637 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://d4a8e5f63274313b74fa3af137147c3d91f1c25ef5570c8872152b48351812e1" gracePeriod=600 Feb 19 09:29:37 crc kubenswrapper[4780]: I0219 09:29:37.406756 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="d4a8e5f63274313b74fa3af137147c3d91f1c25ef5570c8872152b48351812e1" exitCode=0 Feb 19 09:29:37 crc kubenswrapper[4780]: I0219 09:29:37.406825 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"d4a8e5f63274313b74fa3af137147c3d91f1c25ef5570c8872152b48351812e1"} Feb 19 09:29:37 crc kubenswrapper[4780]: I0219 09:29:37.407599 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec"} Feb 19 09:29:37 crc kubenswrapper[4780]: I0219 09:29:37.407685 4780 scope.go:117] "RemoveContainer" containerID="dd33d84ae46cfa7e2c9476aacf2338da3ed900f64d254868681dc819f0360f39" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.214110 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv"] Feb 19 09:30:00 crc kubenswrapper[4780]: E0219 09:30:00.214825 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="registry-server" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.214838 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="registry-server" Feb 19 09:30:00 crc kubenswrapper[4780]: E0219 
09:30:00.214866 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="extract-content" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.214874 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="extract-content" Feb 19 09:30:00 crc kubenswrapper[4780]: E0219 09:30:00.214894 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="extract-utilities" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.214901 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="extract-utilities" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.215033 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb48b3c-91e0-4af2-9200-c613f9d43e03" containerName="registry-server" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.215476 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.217984 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.218384 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.231013 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv"] Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.404200 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4dff2c0-9172-46ce-815e-a844fe38de43-secret-volume\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.404271 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4dff2c0-9172-46ce-815e-a844fe38de43-config-volume\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.404370 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6bk6\" (UniqueName: \"kubernetes.io/projected/b4dff2c0-9172-46ce-815e-a844fe38de43-kube-api-access-d6bk6\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.505917 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6bk6\" (UniqueName: \"kubernetes.io/projected/b4dff2c0-9172-46ce-815e-a844fe38de43-kube-api-access-d6bk6\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.506165 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4dff2c0-9172-46ce-815e-a844fe38de43-secret-volume\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.506230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4dff2c0-9172-46ce-815e-a844fe38de43-config-volume\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.507637 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4dff2c0-9172-46ce-815e-a844fe38de43-config-volume\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.517207 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b4dff2c0-9172-46ce-815e-a844fe38de43-secret-volume\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.531748 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6bk6\" (UniqueName: \"kubernetes.io/projected/b4dff2c0-9172-46ce-815e-a844fe38de43-kube-api-access-d6bk6\") pod \"collect-profiles-29524890-tdjlv\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:00 crc kubenswrapper[4780]: I0219 09:30:00.535625 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:01 crc kubenswrapper[4780]: I0219 09:30:01.014637 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv"] Feb 19 09:30:01 crc kubenswrapper[4780]: I0219 09:30:01.652530 4780 generic.go:334] "Generic (PLEG): container finished" podID="b4dff2c0-9172-46ce-815e-a844fe38de43" containerID="5d6adaca4a6f5001697489c44219015174615f185f5755550008db3c72ec1a5f" exitCode=0 Feb 19 09:30:01 crc kubenswrapper[4780]: I0219 09:30:01.652579 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" event={"ID":"b4dff2c0-9172-46ce-815e-a844fe38de43","Type":"ContainerDied","Data":"5d6adaca4a6f5001697489c44219015174615f185f5755550008db3c72ec1a5f"} Feb 19 09:30:01 crc kubenswrapper[4780]: I0219 09:30:01.652865 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" 
event={"ID":"b4dff2c0-9172-46ce-815e-a844fe38de43","Type":"ContainerStarted","Data":"53289b8ea3f2ee9f9199bc5d3d51fbaf5e9e27ba3e5a8fb0b6a3bbe976e1c437"} Feb 19 09:30:02 crc kubenswrapper[4780]: I0219 09:30:02.988078 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.294170 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4dff2c0-9172-46ce-815e-a844fe38de43-config-volume\") pod \"b4dff2c0-9172-46ce-815e-a844fe38de43\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.294252 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6bk6\" (UniqueName: \"kubernetes.io/projected/b4dff2c0-9172-46ce-815e-a844fe38de43-kube-api-access-d6bk6\") pod \"b4dff2c0-9172-46ce-815e-a844fe38de43\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.294311 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4dff2c0-9172-46ce-815e-a844fe38de43-secret-volume\") pod \"b4dff2c0-9172-46ce-815e-a844fe38de43\" (UID: \"b4dff2c0-9172-46ce-815e-a844fe38de43\") " Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.294958 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dff2c0-9172-46ce-815e-a844fe38de43-config-volume" (OuterVolumeSpecName: "config-volume") pod "b4dff2c0-9172-46ce-815e-a844fe38de43" (UID: "b4dff2c0-9172-46ce-815e-a844fe38de43"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.299584 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4dff2c0-9172-46ce-815e-a844fe38de43-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b4dff2c0-9172-46ce-815e-a844fe38de43" (UID: "b4dff2c0-9172-46ce-815e-a844fe38de43"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.299657 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4dff2c0-9172-46ce-815e-a844fe38de43-kube-api-access-d6bk6" (OuterVolumeSpecName: "kube-api-access-d6bk6") pod "b4dff2c0-9172-46ce-815e-a844fe38de43" (UID: "b4dff2c0-9172-46ce-815e-a844fe38de43"). InnerVolumeSpecName "kube-api-access-d6bk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.395663 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4dff2c0-9172-46ce-815e-a844fe38de43-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.395699 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4dff2c0-9172-46ce-815e-a844fe38de43-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.395708 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6bk6\" (UniqueName: \"kubernetes.io/projected/b4dff2c0-9172-46ce-815e-a844fe38de43-kube-api-access-d6bk6\") on node \"crc\" DevicePath \"\"" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.673726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" 
event={"ID":"b4dff2c0-9172-46ce-815e-a844fe38de43","Type":"ContainerDied","Data":"53289b8ea3f2ee9f9199bc5d3d51fbaf5e9e27ba3e5a8fb0b6a3bbe976e1c437"} Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.673760 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53289b8ea3f2ee9f9199bc5d3d51fbaf5e9e27ba3e5a8fb0b6a3bbe976e1c437" Feb 19 09:30:03 crc kubenswrapper[4780]: I0219 09:30:03.673802 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv" Feb 19 09:30:04 crc kubenswrapper[4780]: I0219 09:30:04.057830 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb"] Feb 19 09:30:04 crc kubenswrapper[4780]: I0219 09:30:04.074392 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524845-592cb"] Feb 19 09:30:05 crc kubenswrapper[4780]: I0219 09:30:05.951536 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6125ea06-2501-442e-b5b1-d44d92f9e162" path="/var/lib/kubelet/pods/6125ea06-2501-442e-b5b1-d44d92f9e162/volumes" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.545895 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vvsdd"] Feb 19 09:30:15 crc kubenswrapper[4780]: E0219 09:30:15.546812 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dff2c0-9172-46ce-815e-a844fe38de43" containerName="collect-profiles" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.546825 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dff2c0-9172-46ce-815e-a844fe38de43" containerName="collect-profiles" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.546965 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4dff2c0-9172-46ce-815e-a844fe38de43" containerName="collect-profiles" 
Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.547995 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.570243 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvsdd"] Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.750202 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmx4\" (UniqueName: \"kubernetes.io/projected/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-kube-api-access-zjmx4\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.750983 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-utilities\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.751487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-catalog-content\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.852903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-catalog-content\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " 
pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.852957 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmx4\" (UniqueName: \"kubernetes.io/projected/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-kube-api-access-zjmx4\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.853002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-utilities\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.853570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-utilities\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.853836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-catalog-content\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:15 crc kubenswrapper[4780]: I0219 09:30:15.874426 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmx4\" (UniqueName: \"kubernetes.io/projected/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-kube-api-access-zjmx4\") pod \"redhat-marketplace-vvsdd\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " 
pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:16 crc kubenswrapper[4780]: I0219 09:30:16.169689 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:16 crc kubenswrapper[4780]: I0219 09:30:16.627646 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvsdd"] Feb 19 09:30:16 crc kubenswrapper[4780]: I0219 09:30:16.783240 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerStarted","Data":"d0cf46765d09988d2f0df335152ffdd4066d519337f2c9fa5f498c213593261a"} Feb 19 09:30:17 crc kubenswrapper[4780]: I0219 09:30:17.792933 4780 generic.go:334] "Generic (PLEG): container finished" podID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerID="5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0" exitCode=0 Feb 19 09:30:17 crc kubenswrapper[4780]: I0219 09:30:17.793010 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerDied","Data":"5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0"} Feb 19 09:30:17 crc kubenswrapper[4780]: I0219 09:30:17.795539 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:30:18 crc kubenswrapper[4780]: I0219 09:30:18.803208 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerStarted","Data":"20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6"} Feb 19 09:30:19 crc kubenswrapper[4780]: I0219 09:30:19.812919 4780 generic.go:334] "Generic (PLEG): container finished" podID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" 
containerID="20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6" exitCode=0 Feb 19 09:30:19 crc kubenswrapper[4780]: I0219 09:30:19.813049 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerDied","Data":"20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6"} Feb 19 09:30:20 crc kubenswrapper[4780]: I0219 09:30:20.824730 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerStarted","Data":"90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f"} Feb 19 09:30:20 crc kubenswrapper[4780]: I0219 09:30:20.855593 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vvsdd" podStartSLOduration=3.218574778 podStartE2EDuration="5.855561964s" podCreationTimestamp="2026-02-19 09:30:15 +0000 UTC" firstStartedPulling="2026-02-19 09:30:17.795215004 +0000 UTC m=+4160.538872463" lastFinishedPulling="2026-02-19 09:30:20.4322022 +0000 UTC m=+4163.175859649" observedRunningTime="2026-02-19 09:30:20.849691091 +0000 UTC m=+4163.593348560" watchObservedRunningTime="2026-02-19 09:30:20.855561964 +0000 UTC m=+4163.599219443" Feb 19 09:30:26 crc kubenswrapper[4780]: I0219 09:30:26.170650 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:26 crc kubenswrapper[4780]: I0219 09:30:26.171473 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:26 crc kubenswrapper[4780]: I0219 09:30:26.244508 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:26 crc kubenswrapper[4780]: I0219 09:30:26.924095 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:26 crc kubenswrapper[4780]: I0219 09:30:26.979113 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvsdd"] Feb 19 09:30:28 crc kubenswrapper[4780]: I0219 09:30:28.897627 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vvsdd" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="registry-server" containerID="cri-o://90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f" gracePeriod=2 Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.875219 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.908948 4780 generic.go:334] "Generic (PLEG): container finished" podID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerID="90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f" exitCode=0 Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.908975 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerDied","Data":"90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f"} Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.909036 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvsdd" Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.909055 4780 scope.go:117] "RemoveContainer" containerID="90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f" Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.909044 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvsdd" event={"ID":"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d","Type":"ContainerDied","Data":"d0cf46765d09988d2f0df335152ffdd4066d519337f2c9fa5f498c213593261a"} Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.946144 4780 scope.go:117] "RemoveContainer" containerID="20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6" Feb 19 09:30:29 crc kubenswrapper[4780]: I0219 09:30:29.970970 4780 scope.go:117] "RemoveContainer" containerID="5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.015502 4780 scope.go:117] "RemoveContainer" containerID="90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f" Feb 19 09:30:30 crc kubenswrapper[4780]: E0219 09:30:30.015800 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f\": container with ID starting with 90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f not found: ID does not exist" containerID="90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.015830 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f"} err="failed to get container status \"90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f\": rpc error: code = NotFound desc = could not find container 
\"90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f\": container with ID starting with 90668f6458da018ddf3cf43e6f7cded68e0178b575a94950ad7be2a814277d2f not found: ID does not exist" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.015853 4780 scope.go:117] "RemoveContainer" containerID="20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6" Feb 19 09:30:30 crc kubenswrapper[4780]: E0219 09:30:30.016035 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6\": container with ID starting with 20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6 not found: ID does not exist" containerID="20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.016057 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6"} err="failed to get container status \"20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6\": rpc error: code = NotFound desc = could not find container \"20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6\": container with ID starting with 20be9978bae1f1e98982a9daa6bb93c3b98f7a3b674bf754ea3c066bba5448f6 not found: ID does not exist" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.016071 4780 scope.go:117] "RemoveContainer" containerID="5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0" Feb 19 09:30:30 crc kubenswrapper[4780]: E0219 09:30:30.016473 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0\": container with ID starting with 5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0 not found: ID does not exist" 
containerID="5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.016495 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0"} err="failed to get container status \"5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0\": rpc error: code = NotFound desc = could not find container \"5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0\": container with ID starting with 5abbbb0a91a45134d52972931ebe01af4b4a6c0bd6f01cf6a9e6372ebb7fa4d0 not found: ID does not exist" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.067796 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-utilities\") pod \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.067878 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjmx4\" (UniqueName: \"kubernetes.io/projected/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-kube-api-access-zjmx4\") pod \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.068000 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-catalog-content\") pod \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\" (UID: \"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d\") " Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.068873 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-utilities" (OuterVolumeSpecName: "utilities") pod 
"57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" (UID: "57522f7c-c8dc-4fb4-96a7-11df96f5ff7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.073742 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-kube-api-access-zjmx4" (OuterVolumeSpecName: "kube-api-access-zjmx4") pod "57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" (UID: "57522f7c-c8dc-4fb4-96a7-11df96f5ff7d"). InnerVolumeSpecName "kube-api-access-zjmx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.170051 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjmx4\" (UniqueName: \"kubernetes.io/projected/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-kube-api-access-zjmx4\") on node \"crc\" DevicePath \"\"" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.170161 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.177531 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" (UID: "57522f7c-c8dc-4fb4-96a7-11df96f5ff7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.251369 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvsdd"] Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.270799 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvsdd"] Feb 19 09:30:30 crc kubenswrapper[4780]: I0219 09:30:30.271514 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:30:31 crc kubenswrapper[4780]: I0219 09:30:31.946313 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" path="/var/lib/kubelet/pods/57522f7c-c8dc-4fb4-96a7-11df96f5ff7d/volumes" Feb 19 09:31:04 crc kubenswrapper[4780]: I0219 09:31:04.405625 4780 scope.go:117] "RemoveContainer" containerID="f8f7e3f045950df5a293e367cc6a8fcc792e550728934fc773472a1eaa07e01e" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.489829 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlt62"] Feb 19 09:31:25 crc kubenswrapper[4780]: E0219 09:31:25.490689 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="extract-utilities" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.490701 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="extract-utilities" Feb 19 09:31:25 crc kubenswrapper[4780]: E0219 09:31:25.490713 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="extract-content" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.490719 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="extract-content" Feb 19 09:31:25 crc kubenswrapper[4780]: E0219 09:31:25.490732 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="registry-server" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.490738 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="registry-server" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.490918 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="57522f7c-c8dc-4fb4-96a7-11df96f5ff7d" containerName="registry-server" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.491896 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.499538 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlt62"] Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.620606 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-catalog-content\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.620818 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-utilities\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.620945 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgbc\" (UniqueName: \"kubernetes.io/projected/ee906d66-745e-4895-9fb0-e4ba9682f0c8-kube-api-access-7pgbc\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.722752 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-catalog-content\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.722839 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-utilities\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.722878 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pgbc\" (UniqueName: \"kubernetes.io/projected/ee906d66-745e-4895-9fb0-e4ba9682f0c8-kube-api-access-7pgbc\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.723399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-utilities\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.723409 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-catalog-content\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.742961 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pgbc\" (UniqueName: \"kubernetes.io/projected/ee906d66-745e-4895-9fb0-e4ba9682f0c8-kube-api-access-7pgbc\") pod \"certified-operators-rlt62\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:25 crc kubenswrapper[4780]: I0219 09:31:25.855997 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:26 crc kubenswrapper[4780]: I0219 09:31:26.383656 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlt62"] Feb 19 09:31:27 crc kubenswrapper[4780]: I0219 09:31:27.407713 4780 generic.go:334] "Generic (PLEG): container finished" podID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerID="44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f" exitCode=0 Feb 19 09:31:27 crc kubenswrapper[4780]: I0219 09:31:27.408083 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerDied","Data":"44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f"} Feb 19 09:31:27 crc kubenswrapper[4780]: I0219 09:31:27.408224 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerStarted","Data":"1c9b7e7ac86689ced4be59d216e6c9dbc2bddaa2348bc5444bd32e04d10bbbcc"} Feb 19 09:31:28 crc 
kubenswrapper[4780]: I0219 09:31:28.417523 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerStarted","Data":"4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63"} Feb 19 09:31:29 crc kubenswrapper[4780]: I0219 09:31:29.441787 4780 generic.go:334] "Generic (PLEG): container finished" podID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerID="4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63" exitCode=0 Feb 19 09:31:29 crc kubenswrapper[4780]: I0219 09:31:29.441849 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerDied","Data":"4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63"} Feb 19 09:31:30 crc kubenswrapper[4780]: I0219 09:31:30.451434 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerStarted","Data":"96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07"} Feb 19 09:31:30 crc kubenswrapper[4780]: I0219 09:31:30.473631 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlt62" podStartSLOduration=2.85545916 podStartE2EDuration="5.473606826s" podCreationTimestamp="2026-02-19 09:31:25 +0000 UTC" firstStartedPulling="2026-02-19 09:31:27.413657544 +0000 UTC m=+4230.157315003" lastFinishedPulling="2026-02-19 09:31:30.03180521 +0000 UTC m=+4232.775462669" observedRunningTime="2026-02-19 09:31:30.471205307 +0000 UTC m=+4233.214862776" watchObservedRunningTime="2026-02-19 09:31:30.473606826 +0000 UTC m=+4233.217264285" Feb 19 09:31:35 crc kubenswrapper[4780]: I0219 09:31:35.856539 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:35 crc kubenswrapper[4780]: I0219 09:31:35.857103 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:36 crc kubenswrapper[4780]: I0219 09:31:36.016340 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:36 crc kubenswrapper[4780]: I0219 09:31:36.336328 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:31:36 crc kubenswrapper[4780]: I0219 09:31:36.336430 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:31:36 crc kubenswrapper[4780]: I0219 09:31:36.567453 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:36 crc kubenswrapper[4780]: I0219 09:31:36.628614 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlt62"] Feb 19 09:31:38 crc kubenswrapper[4780]: I0219 09:31:38.512533 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlt62" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="registry-server" containerID="cri-o://96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07" gracePeriod=2 Feb 19 09:31:38 crc kubenswrapper[4780]: I0219 09:31:38.946506 4780 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.023062 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pgbc\" (UniqueName: \"kubernetes.io/projected/ee906d66-745e-4895-9fb0-e4ba9682f0c8-kube-api-access-7pgbc\") pod \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.023136 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-catalog-content\") pod \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.023298 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-utilities\") pod \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\" (UID: \"ee906d66-745e-4895-9fb0-e4ba9682f0c8\") " Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.024030 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-utilities" (OuterVolumeSpecName: "utilities") pod "ee906d66-745e-4895-9fb0-e4ba9682f0c8" (UID: "ee906d66-745e-4895-9fb0-e4ba9682f0c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.036435 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee906d66-745e-4895-9fb0-e4ba9682f0c8-kube-api-access-7pgbc" (OuterVolumeSpecName: "kube-api-access-7pgbc") pod "ee906d66-745e-4895-9fb0-e4ba9682f0c8" (UID: "ee906d66-745e-4895-9fb0-e4ba9682f0c8"). 
InnerVolumeSpecName "kube-api-access-7pgbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.079268 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee906d66-745e-4895-9fb0-e4ba9682f0c8" (UID: "ee906d66-745e-4895-9fb0-e4ba9682f0c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.125733 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.125798 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pgbc\" (UniqueName: \"kubernetes.io/projected/ee906d66-745e-4895-9fb0-e4ba9682f0c8-kube-api-access-7pgbc\") on node \"crc\" DevicePath \"\"" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.125811 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee906d66-745e-4895-9fb0-e4ba9682f0c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.523968 4780 generic.go:334] "Generic (PLEG): container finished" podID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerID="96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07" exitCode=0 Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.524046 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlt62" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.524035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerDied","Data":"96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07"} Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.525677 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlt62" event={"ID":"ee906d66-745e-4895-9fb0-e4ba9682f0c8","Type":"ContainerDied","Data":"1c9b7e7ac86689ced4be59d216e6c9dbc2bddaa2348bc5444bd32e04d10bbbcc"} Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.525717 4780 scope.go:117] "RemoveContainer" containerID="96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.567615 4780 scope.go:117] "RemoveContainer" containerID="4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.576333 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlt62"] Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.585002 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlt62"] Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.596101 4780 scope.go:117] "RemoveContainer" containerID="44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.629088 4780 scope.go:117] "RemoveContainer" containerID="96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07" Feb 19 09:31:39 crc kubenswrapper[4780]: E0219 09:31:39.629684 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07\": container with ID starting with 96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07 not found: ID does not exist" containerID="96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.629726 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07"} err="failed to get container status \"96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07\": rpc error: code = NotFound desc = could not find container \"96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07\": container with ID starting with 96c4f508e65eec452bddc2d6cf8c25f0752aac02102832ed324a0bffcd05be07 not found: ID does not exist" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.629754 4780 scope.go:117] "RemoveContainer" containerID="4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63" Feb 19 09:31:39 crc kubenswrapper[4780]: E0219 09:31:39.630069 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63\": container with ID starting with 4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63 not found: ID does not exist" containerID="4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.630096 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63"} err="failed to get container status \"4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63\": rpc error: code = NotFound desc = could not find container \"4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63\": container with ID 
starting with 4d131614be2f8a8caa517f0c491d76264ef433168008c43a0a79193172058a63 not found: ID does not exist" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.630115 4780 scope.go:117] "RemoveContainer" containerID="44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f" Feb 19 09:31:39 crc kubenswrapper[4780]: E0219 09:31:39.630496 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f\": container with ID starting with 44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f not found: ID does not exist" containerID="44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.630523 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f"} err="failed to get container status \"44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f\": rpc error: code = NotFound desc = could not find container \"44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f\": container with ID starting with 44e890546dc0d7adef9835202e093ad531f890c0f66210e1e10e3680b8d4cd2f not found: ID does not exist" Feb 19 09:31:39 crc kubenswrapper[4780]: I0219 09:31:39.951492 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" path="/var/lib/kubelet/pods/ee906d66-745e-4895-9fb0-e4ba9682f0c8/volumes" Feb 19 09:32:06 crc kubenswrapper[4780]: I0219 09:32:06.336792 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:32:06 crc kubenswrapper[4780]: I0219 
09:32:06.337407 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:32:36 crc kubenswrapper[4780]: I0219 09:32:36.336042 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:32:36 crc kubenswrapper[4780]: I0219 09:32:36.336652 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:32:36 crc kubenswrapper[4780]: I0219 09:32:36.336707 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:32:36 crc kubenswrapper[4780]: I0219 09:32:36.337404 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:32:36 crc kubenswrapper[4780]: I0219 09:32:36.337471 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" 
containerName="machine-config-daemon" containerID="cri-o://046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" gracePeriod=600 Feb 19 09:32:36 crc kubenswrapper[4780]: E0219 09:32:36.518891 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:32:37 crc kubenswrapper[4780]: I0219 09:32:37.049192 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" exitCode=0 Feb 19 09:32:37 crc kubenswrapper[4780]: I0219 09:32:37.049295 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec"} Feb 19 09:32:37 crc kubenswrapper[4780]: I0219 09:32:37.049345 4780 scope.go:117] "RemoveContainer" containerID="d4a8e5f63274313b74fa3af137147c3d91f1c25ef5570c8872152b48351812e1" Feb 19 09:32:37 crc kubenswrapper[4780]: I0219 09:32:37.050027 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:32:37 crc kubenswrapper[4780]: E0219 09:32:37.050787 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.319972 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skn8m"] Feb 19 09:32:38 crc kubenswrapper[4780]: E0219 09:32:38.320507 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="extract-content" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.320521 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="extract-content" Feb 19 09:32:38 crc kubenswrapper[4780]: E0219 09:32:38.320534 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="extract-utilities" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.320540 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="extract-utilities" Feb 19 09:32:38 crc kubenswrapper[4780]: E0219 09:32:38.320561 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="registry-server" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.320569 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="registry-server" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.320704 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee906d66-745e-4895-9fb0-e4ba9682f0c8" containerName="registry-server" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.321594 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.345837 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skn8m"] Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.392031 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlwc\" (UniqueName: \"kubernetes.io/projected/cb6549cc-9764-4a21-bfeb-343836927ac9-kube-api-access-wmlwc\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.392313 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-utilities\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.392466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-catalog-content\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.494005 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-catalog-content\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.494143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wmlwc\" (UniqueName: \"kubernetes.io/projected/cb6549cc-9764-4a21-bfeb-343836927ac9-kube-api-access-wmlwc\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.494198 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-utilities\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.494713 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-catalog-content\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.494734 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-utilities\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.521688 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmlwc\" (UniqueName: \"kubernetes.io/projected/cb6549cc-9764-4a21-bfeb-343836927ac9-kube-api-access-wmlwc\") pod \"redhat-operators-skn8m\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:38 crc kubenswrapper[4780]: I0219 09:32:38.657223 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:39 crc kubenswrapper[4780]: I0219 09:32:39.124334 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skn8m"] Feb 19 09:32:40 crc kubenswrapper[4780]: I0219 09:32:40.083505 4780 generic.go:334] "Generic (PLEG): container finished" podID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerID="304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3" exitCode=0 Feb 19 09:32:40 crc kubenswrapper[4780]: I0219 09:32:40.083610 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerDied","Data":"304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3"} Feb 19 09:32:40 crc kubenswrapper[4780]: I0219 09:32:40.083880 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerStarted","Data":"e58bc234ee7a4fc94d31709b1a0572f1f51b1d975b0e8ed0450ccf7b46cc4d62"} Feb 19 09:32:41 crc kubenswrapper[4780]: I0219 09:32:41.094384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerStarted","Data":"6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee"} Feb 19 09:32:42 crc kubenswrapper[4780]: I0219 09:32:42.107898 4780 generic.go:334] "Generic (PLEG): container finished" podID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerID="6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee" exitCode=0 Feb 19 09:32:42 crc kubenswrapper[4780]: I0219 09:32:42.108001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" 
event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerDied","Data":"6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee"} Feb 19 09:32:43 crc kubenswrapper[4780]: I0219 09:32:43.121245 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerStarted","Data":"bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f"} Feb 19 09:32:43 crc kubenswrapper[4780]: I0219 09:32:43.144644 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skn8m" podStartSLOduration=2.718389177 podStartE2EDuration="5.144623978s" podCreationTimestamp="2026-02-19 09:32:38 +0000 UTC" firstStartedPulling="2026-02-19 09:32:40.086446991 +0000 UTC m=+4302.830104470" lastFinishedPulling="2026-02-19 09:32:42.512681792 +0000 UTC m=+4305.256339271" observedRunningTime="2026-02-19 09:32:43.144493205 +0000 UTC m=+4305.888150664" watchObservedRunningTime="2026-02-19 09:32:43.144623978 +0000 UTC m=+4305.888281427" Feb 19 09:32:48 crc kubenswrapper[4780]: I0219 09:32:48.658484 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:48 crc kubenswrapper[4780]: I0219 09:32:48.658848 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:49 crc kubenswrapper[4780]: I0219 09:32:49.729474 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skn8m" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="registry-server" probeResult="failure" output=< Feb 19 09:32:49 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 09:32:49 crc kubenswrapper[4780]: > Feb 19 09:32:49 crc kubenswrapper[4780]: I0219 09:32:49.937952 4780 scope.go:117] "RemoveContainer" 
containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:32:49 crc kubenswrapper[4780]: E0219 09:32:49.938266 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:32:58 crc kubenswrapper[4780]: I0219 09:32:58.736195 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:58 crc kubenswrapper[4780]: I0219 09:32:58.813698 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:32:58 crc kubenswrapper[4780]: I0219 09:32:58.980273 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skn8m"] Feb 19 09:33:00 crc kubenswrapper[4780]: I0219 09:33:00.279837 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skn8m" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="registry-server" containerID="cri-o://bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f" gracePeriod=2 Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.217899 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.289820 4780 generic.go:334] "Generic (PLEG): container finished" podID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerID="bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f" exitCode=0 Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.289878 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skn8m" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.289872 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerDied","Data":"bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f"} Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.290036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skn8m" event={"ID":"cb6549cc-9764-4a21-bfeb-343836927ac9","Type":"ContainerDied","Data":"e58bc234ee7a4fc94d31709b1a0572f1f51b1d975b0e8ed0450ccf7b46cc4d62"} Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.290076 4780 scope.go:117] "RemoveContainer" containerID="bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.309945 4780 scope.go:117] "RemoveContainer" containerID="6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.337418 4780 scope.go:117] "RemoveContainer" containerID="304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.356891 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmlwc\" (UniqueName: \"kubernetes.io/projected/cb6549cc-9764-4a21-bfeb-343836927ac9-kube-api-access-wmlwc\") pod 
\"cb6549cc-9764-4a21-bfeb-343836927ac9\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.356948 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-utilities\") pod \"cb6549cc-9764-4a21-bfeb-343836927ac9\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.356972 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-catalog-content\") pod \"cb6549cc-9764-4a21-bfeb-343836927ac9\" (UID: \"cb6549cc-9764-4a21-bfeb-343836927ac9\") " Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.357996 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-utilities" (OuterVolumeSpecName: "utilities") pod "cb6549cc-9764-4a21-bfeb-343836927ac9" (UID: "cb6549cc-9764-4a21-bfeb-343836927ac9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.360247 4780 scope.go:117] "RemoveContainer" containerID="bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.360716 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:33:01 crc kubenswrapper[4780]: E0219 09:33:01.360864 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f\": container with ID starting with bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f not found: ID does not exist" containerID="bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.360912 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f"} err="failed to get container status \"bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f\": rpc error: code = NotFound desc = could not find container \"bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f\": container with ID starting with bb63e7b7ed0da21633b75cda32e5699dcc9f495ed072911ee0311df34d332d5f not found: ID does not exist" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.360946 4780 scope.go:117] "RemoveContainer" containerID="6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.362461 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6549cc-9764-4a21-bfeb-343836927ac9-kube-api-access-wmlwc" (OuterVolumeSpecName: 
"kube-api-access-wmlwc") pod "cb6549cc-9764-4a21-bfeb-343836927ac9" (UID: "cb6549cc-9764-4a21-bfeb-343836927ac9"). InnerVolumeSpecName "kube-api-access-wmlwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:33:01 crc kubenswrapper[4780]: E0219 09:33:01.362611 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee\": container with ID starting with 6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee not found: ID does not exist" containerID="6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.362636 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee"} err="failed to get container status \"6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee\": rpc error: code = NotFound desc = could not find container \"6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee\": container with ID starting with 6800c3c2e55227a6165380a77b61b73a24c11a431097355c5b49caf5b3a89fee not found: ID does not exist" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.362651 4780 scope.go:117] "RemoveContainer" containerID="304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3" Feb 19 09:33:01 crc kubenswrapper[4780]: E0219 09:33:01.363664 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3\": container with ID starting with 304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3 not found: ID does not exist" containerID="304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.363692 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3"} err="failed to get container status \"304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3\": rpc error: code = NotFound desc = could not find container \"304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3\": container with ID starting with 304f2fbc1237665422882edb918264e53e3685fde9ebdd5165e65b1b84782ea3 not found: ID does not exist" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.461955 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmlwc\" (UniqueName: \"kubernetes.io/projected/cb6549cc-9764-4a21-bfeb-343836927ac9-kube-api-access-wmlwc\") on node \"crc\" DevicePath \"\"" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.481148 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb6549cc-9764-4a21-bfeb-343836927ac9" (UID: "cb6549cc-9764-4a21-bfeb-343836927ac9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.564579 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6549cc-9764-4a21-bfeb-343836927ac9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.642525 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skn8m"] Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.653604 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skn8m"] Feb 19 09:33:01 crc kubenswrapper[4780]: I0219 09:33:01.955571 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" path="/var/lib/kubelet/pods/cb6549cc-9764-4a21-bfeb-343836927ac9/volumes" Feb 19 09:33:03 crc kubenswrapper[4780]: I0219 09:33:03.938501 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:33:03 crc kubenswrapper[4780]: E0219 09:33:03.940422 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:33:18 crc kubenswrapper[4780]: I0219 09:33:18.939246 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:33:18 crc kubenswrapper[4780]: E0219 09:33:18.940045 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:33:29 crc kubenswrapper[4780]: I0219 09:33:29.939172 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:33:29 crc kubenswrapper[4780]: E0219 09:33:29.940391 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:33:43 crc kubenswrapper[4780]: I0219 09:33:43.938079 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:33:43 crc kubenswrapper[4780]: E0219 09:33:43.939985 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:33:58 crc kubenswrapper[4780]: I0219 09:33:58.939025 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:33:58 crc kubenswrapper[4780]: E0219 09:33:58.940590 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.407198 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-xsd65"] Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.411808 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-xsd65"] Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.533451 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-pqd6k"] Feb 19 09:34:09 crc kubenswrapper[4780]: E0219 09:34:09.533825 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="extract-content" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.533863 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="extract-content" Feb 19 09:34:09 crc kubenswrapper[4780]: E0219 09:34:09.533875 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="extract-utilities" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.533881 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="extract-utilities" Feb 19 09:34:09 crc kubenswrapper[4780]: E0219 09:34:09.533929 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="registry-server" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.533937 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="registry-server" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 
09:34:09.534104 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6549cc-9764-4a21-bfeb-343836927ac9" containerName="registry-server" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.534812 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.537089 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.537101 4780 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r5h5z" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.537478 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.537516 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.551221 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-pqd6k"] Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.610439 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07fd0721-1058-4a26-a7e1-771328394705-node-mnt\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.610496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07fd0721-1058-4a26-a7e1-771328394705-crc-storage\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 
09:34:09.610687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfswn\" (UniqueName: \"kubernetes.io/projected/07fd0721-1058-4a26-a7e1-771328394705-kube-api-access-xfswn\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.712190 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07fd0721-1058-4a26-a7e1-771328394705-node-mnt\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.712265 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07fd0721-1058-4a26-a7e1-771328394705-crc-storage\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.712382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfswn\" (UniqueName: \"kubernetes.io/projected/07fd0721-1058-4a26-a7e1-771328394705-kube-api-access-xfswn\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.712461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07fd0721-1058-4a26-a7e1-771328394705-node-mnt\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.713622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" 
(UniqueName: \"kubernetes.io/configmap/07fd0721-1058-4a26-a7e1-771328394705-crc-storage\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.737609 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfswn\" (UniqueName: \"kubernetes.io/projected/07fd0721-1058-4a26-a7e1-771328394705-kube-api-access-xfswn\") pod \"crc-storage-crc-pqd6k\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.863895 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:09 crc kubenswrapper[4780]: I0219 09:34:09.953204 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e68e1ee-ab83-4f62-91da-1f5fd9c051e6" path="/var/lib/kubelet/pods/2e68e1ee-ab83-4f62-91da-1f5fd9c051e6/volumes" Feb 19 09:34:10 crc kubenswrapper[4780]: I0219 09:34:10.130204 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-pqd6k"] Feb 19 09:34:10 crc kubenswrapper[4780]: I0219 09:34:10.939080 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pqd6k" event={"ID":"07fd0721-1058-4a26-a7e1-771328394705","Type":"ContainerStarted","Data":"86e5990bfc56ecb7d46e479729ec9707c5e698d2ab8dffba501a34d6f4c3b8d0"} Feb 19 09:34:10 crc kubenswrapper[4780]: I0219 09:34:10.939164 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pqd6k" event={"ID":"07fd0721-1058-4a26-a7e1-771328394705","Type":"ContainerStarted","Data":"e3e714d9dba73150b54c376227489cfcf3b30c7c13fb04f40f5ae3843a25dfb5"} Feb 19 09:34:10 crc kubenswrapper[4780]: I0219 09:34:10.956811 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-pqd6k" 
podStartSLOduration=1.488552226 podStartE2EDuration="1.956781718s" podCreationTimestamp="2026-02-19 09:34:09 +0000 UTC" firstStartedPulling="2026-02-19 09:34:10.152169729 +0000 UTC m=+4392.895827178" lastFinishedPulling="2026-02-19 09:34:10.620399181 +0000 UTC m=+4393.364056670" observedRunningTime="2026-02-19 09:34:10.95276113 +0000 UTC m=+4393.696418619" watchObservedRunningTime="2026-02-19 09:34:10.956781718 +0000 UTC m=+4393.700439177" Feb 19 09:34:11 crc kubenswrapper[4780]: I0219 09:34:11.951779 4780 generic.go:334] "Generic (PLEG): container finished" podID="07fd0721-1058-4a26-a7e1-771328394705" containerID="86e5990bfc56ecb7d46e479729ec9707c5e698d2ab8dffba501a34d6f4c3b8d0" exitCode=0 Feb 19 09:34:11 crc kubenswrapper[4780]: I0219 09:34:11.955670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pqd6k" event={"ID":"07fd0721-1058-4a26-a7e1-771328394705","Type":"ContainerDied","Data":"86e5990bfc56ecb7d46e479729ec9707c5e698d2ab8dffba501a34d6f4c3b8d0"} Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.366582 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.490810 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07fd0721-1058-4a26-a7e1-771328394705-crc-storage\") pod \"07fd0721-1058-4a26-a7e1-771328394705\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.491038 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07fd0721-1058-4a26-a7e1-771328394705-node-mnt\") pod \"07fd0721-1058-4a26-a7e1-771328394705\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.491200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07fd0721-1058-4a26-a7e1-771328394705-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "07fd0721-1058-4a26-a7e1-771328394705" (UID: "07fd0721-1058-4a26-a7e1-771328394705"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.491367 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfswn\" (UniqueName: \"kubernetes.io/projected/07fd0721-1058-4a26-a7e1-771328394705-kube-api-access-xfswn\") pod \"07fd0721-1058-4a26-a7e1-771328394705\" (UID: \"07fd0721-1058-4a26-a7e1-771328394705\") " Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.493247 4780 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07fd0721-1058-4a26-a7e1-771328394705-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.501001 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fd0721-1058-4a26-a7e1-771328394705-kube-api-access-xfswn" (OuterVolumeSpecName: "kube-api-access-xfswn") pod "07fd0721-1058-4a26-a7e1-771328394705" (UID: "07fd0721-1058-4a26-a7e1-771328394705"). InnerVolumeSpecName "kube-api-access-xfswn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.532248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fd0721-1058-4a26-a7e1-771328394705-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "07fd0721-1058-4a26-a7e1-771328394705" (UID: "07fd0721-1058-4a26-a7e1-771328394705"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.595591 4780 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07fd0721-1058-4a26-a7e1-771328394705-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.595679 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfswn\" (UniqueName: \"kubernetes.io/projected/07fd0721-1058-4a26-a7e1-771328394705-kube-api-access-xfswn\") on node \"crc\" DevicePath \"\"" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.939886 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:34:13 crc kubenswrapper[4780]: E0219 09:34:13.940418 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.971339 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pqd6k" event={"ID":"07fd0721-1058-4a26-a7e1-771328394705","Type":"ContainerDied","Data":"e3e714d9dba73150b54c376227489cfcf3b30c7c13fb04f40f5ae3843a25dfb5"} Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.971400 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e714d9dba73150b54c376227489cfcf3b30c7c13fb04f40f5ae3843a25dfb5" Feb 19 09:34:13 crc kubenswrapper[4780]: I0219 09:34:13.971409 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-pqd6k" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.611948 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-pqd6k"] Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.624483 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-pqd6k"] Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.775409 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-5zqdd"] Feb 19 09:34:15 crc kubenswrapper[4780]: E0219 09:34:15.775946 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fd0721-1058-4a26-a7e1-771328394705" containerName="storage" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.775977 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fd0721-1058-4a26-a7e1-771328394705" containerName="storage" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.776355 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fd0721-1058-4a26-a7e1-771328394705" containerName="storage" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.777333 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.780318 4780 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r5h5z" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.781021 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.781389 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.781523 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.785107 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5zqdd"] Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.948506 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fd0721-1058-4a26-a7e1-771328394705" path="/var/lib/kubelet/pods/07fd0721-1058-4a26-a7e1-771328394705/volumes" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.949777 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c87e7374-e466-4098-b203-3b57ba08eaa8-crc-storage\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 09:34:15.949913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pr6\" (UniqueName: \"kubernetes.io/projected/c87e7374-e466-4098-b203-3b57ba08eaa8-kube-api-access-z9pr6\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:15 crc kubenswrapper[4780]: I0219 
09:34:15.950277 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c87e7374-e466-4098-b203-3b57ba08eaa8-node-mnt\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.052737 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c87e7374-e466-4098-b203-3b57ba08eaa8-node-mnt\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.052951 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c87e7374-e466-4098-b203-3b57ba08eaa8-crc-storage\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.053315 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c87e7374-e466-4098-b203-3b57ba08eaa8-node-mnt\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.053898 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9pr6\" (UniqueName: \"kubernetes.io/projected/c87e7374-e466-4098-b203-3b57ba08eaa8-kube-api-access-z9pr6\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.054497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/c87e7374-e466-4098-b203-3b57ba08eaa8-crc-storage\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.091173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9pr6\" (UniqueName: \"kubernetes.io/projected/c87e7374-e466-4098-b203-3b57ba08eaa8-kube-api-access-z9pr6\") pod \"crc-storage-crc-5zqdd\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.098978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:16 crc kubenswrapper[4780]: I0219 09:34:16.556742 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5zqdd"] Feb 19 09:34:17 crc kubenswrapper[4780]: I0219 09:34:17.007855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5zqdd" event={"ID":"c87e7374-e466-4098-b203-3b57ba08eaa8","Type":"ContainerStarted","Data":"70c802743d4c6cf6d2062bca69bbe009a2d3cdf882103c44e0b95c5ec3831cdb"} Feb 19 09:34:18 crc kubenswrapper[4780]: I0219 09:34:18.026536 4780 generic.go:334] "Generic (PLEG): container finished" podID="c87e7374-e466-4098-b203-3b57ba08eaa8" containerID="075e9322e83fcae9b8b7130b2181a6c3c569836deabae7ec8d7ddc2d13d7acb5" exitCode=0 Feb 19 09:34:18 crc kubenswrapper[4780]: I0219 09:34:18.026648 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5zqdd" event={"ID":"c87e7374-e466-4098-b203-3b57ba08eaa8","Type":"ContainerDied","Data":"075e9322e83fcae9b8b7130b2181a6c3c569836deabae7ec8d7ddc2d13d7acb5"} Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.028467 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.053070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5zqdd" event={"ID":"c87e7374-e466-4098-b203-3b57ba08eaa8","Type":"ContainerDied","Data":"70c802743d4c6cf6d2062bca69bbe009a2d3cdf882103c44e0b95c5ec3831cdb"} Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.053147 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c802743d4c6cf6d2062bca69bbe009a2d3cdf882103c44e0b95c5ec3831cdb" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.053234 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5zqdd" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.124161 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c87e7374-e466-4098-b203-3b57ba08eaa8-crc-storage\") pod \"c87e7374-e466-4098-b203-3b57ba08eaa8\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.124285 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9pr6\" (UniqueName: \"kubernetes.io/projected/c87e7374-e466-4098-b203-3b57ba08eaa8-kube-api-access-z9pr6\") pod \"c87e7374-e466-4098-b203-3b57ba08eaa8\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.124382 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c87e7374-e466-4098-b203-3b57ba08eaa8-node-mnt\") pod \"c87e7374-e466-4098-b203-3b57ba08eaa8\" (UID: \"c87e7374-e466-4098-b203-3b57ba08eaa8\") " Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.124610 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c87e7374-e466-4098-b203-3b57ba08eaa8-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "c87e7374-e466-4098-b203-3b57ba08eaa8" (UID: "c87e7374-e466-4098-b203-3b57ba08eaa8"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.124913 4780 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c87e7374-e466-4098-b203-3b57ba08eaa8-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.132086 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87e7374-e466-4098-b203-3b57ba08eaa8-kube-api-access-z9pr6" (OuterVolumeSpecName: "kube-api-access-z9pr6") pod "c87e7374-e466-4098-b203-3b57ba08eaa8" (UID: "c87e7374-e466-4098-b203-3b57ba08eaa8"). InnerVolumeSpecName "kube-api-access-z9pr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.159366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87e7374-e466-4098-b203-3b57ba08eaa8-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "c87e7374-e466-4098-b203-3b57ba08eaa8" (UID: "c87e7374-e466-4098-b203-3b57ba08eaa8"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.226749 4780 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c87e7374-e466-4098-b203-3b57ba08eaa8-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 09:34:20 crc kubenswrapper[4780]: I0219 09:34:20.226831 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9pr6\" (UniqueName: \"kubernetes.io/projected/c87e7374-e466-4098-b203-3b57ba08eaa8-kube-api-access-z9pr6\") on node \"crc\" DevicePath \"\"" Feb 19 09:34:24 crc kubenswrapper[4780]: I0219 09:34:24.938484 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:34:24 crc kubenswrapper[4780]: E0219 09:34:24.939080 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:34:35 crc kubenswrapper[4780]: I0219 09:34:35.938904 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:34:35 crc kubenswrapper[4780]: E0219 09:34:35.941602 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:34:50 crc kubenswrapper[4780]: I0219 
09:34:50.938699 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:34:50 crc kubenswrapper[4780]: E0219 09:34:50.939527 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:35:04 crc kubenswrapper[4780]: I0219 09:35:04.547277 4780 scope.go:117] "RemoveContainer" containerID="a1bf9a294a41019bbb79d5fb1278795d943b9f54258bb31a18bfe3714e600b5d" Feb 19 09:35:05 crc kubenswrapper[4780]: I0219 09:35:05.938844 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:35:05 crc kubenswrapper[4780]: E0219 09:35:05.939073 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:35:16 crc kubenswrapper[4780]: I0219 09:35:16.938838 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:35:16 crc kubenswrapper[4780]: E0219 09:35:16.939740 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:35:28 crc kubenswrapper[4780]: I0219 09:35:28.938664 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:35:28 crc kubenswrapper[4780]: E0219 09:35:28.939407 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:35:39 crc kubenswrapper[4780]: I0219 09:35:39.941630 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:35:39 crc kubenswrapper[4780]: E0219 09:35:39.942873 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:35:54 crc kubenswrapper[4780]: I0219 09:35:54.938811 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:35:54 crc kubenswrapper[4780]: E0219 09:35:54.940029 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:36:06 crc kubenswrapper[4780]: I0219 09:36:06.938592 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:36:06 crc kubenswrapper[4780]: E0219 09:36:06.939385 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.672938 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wg9dd"] Feb 19 09:36:16 crc kubenswrapper[4780]: E0219 09:36:16.673737 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87e7374-e466-4098-b203-3b57ba08eaa8" containerName="storage" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.673760 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87e7374-e466-4098-b203-3b57ba08eaa8" containerName="storage" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.674006 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87e7374-e466-4098-b203-3b57ba08eaa8" containerName="storage" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.675547 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.692531 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wg9dd"] Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.795202 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-utilities\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.795267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4zh6\" (UniqueName: \"kubernetes.io/projected/1c0271da-38da-457a-8f45-3e3e904dd524-kube-api-access-k4zh6\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.795306 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-catalog-content\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.897354 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-utilities\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.897407 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k4zh6\" (UniqueName: \"kubernetes.io/projected/1c0271da-38da-457a-8f45-3e3e904dd524-kube-api-access-k4zh6\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.897439 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-catalog-content\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.897939 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-catalog-content\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.898202 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-utilities\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:16 crc kubenswrapper[4780]: I0219 09:36:16.927036 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4zh6\" (UniqueName: \"kubernetes.io/projected/1c0271da-38da-457a-8f45-3e3e904dd524-kube-api-access-k4zh6\") pod \"community-operators-wg9dd\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:17 crc kubenswrapper[4780]: I0219 09:36:17.024653 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:17 crc kubenswrapper[4780]: I0219 09:36:17.535453 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wg9dd"] Feb 19 09:36:18 crc kubenswrapper[4780]: I0219 09:36:18.122264 4780 generic.go:334] "Generic (PLEG): container finished" podID="1c0271da-38da-457a-8f45-3e3e904dd524" containerID="d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f" exitCode=0 Feb 19 09:36:18 crc kubenswrapper[4780]: I0219 09:36:18.122409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerDied","Data":"d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f"} Feb 19 09:36:18 crc kubenswrapper[4780]: I0219 09:36:18.122580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerStarted","Data":"043a33c3a031cba70477a80015f058554a4ce46afc6319966bb550ce2e90e3a6"} Feb 19 09:36:18 crc kubenswrapper[4780]: I0219 09:36:18.124732 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:36:19 crc kubenswrapper[4780]: I0219 09:36:19.131572 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerStarted","Data":"6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a"} Feb 19 09:36:19 crc kubenswrapper[4780]: I0219 09:36:19.941920 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:36:19 crc kubenswrapper[4780]: E0219 09:36:19.942289 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:36:20 crc kubenswrapper[4780]: I0219 09:36:20.142270 4780 generic.go:334] "Generic (PLEG): container finished" podID="1c0271da-38da-457a-8f45-3e3e904dd524" containerID="6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a" exitCode=0 Feb 19 09:36:20 crc kubenswrapper[4780]: I0219 09:36:20.142321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerDied","Data":"6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a"} Feb 19 09:36:21 crc kubenswrapper[4780]: I0219 09:36:21.158315 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerStarted","Data":"5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b"} Feb 19 09:36:21 crc kubenswrapper[4780]: I0219 09:36:21.191713 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wg9dd" podStartSLOduration=2.745233376 podStartE2EDuration="5.191687311s" podCreationTimestamp="2026-02-19 09:36:16 +0000 UTC" firstStartedPulling="2026-02-19 09:36:18.12436786 +0000 UTC m=+4520.868025319" lastFinishedPulling="2026-02-19 09:36:20.570821805 +0000 UTC m=+4523.314479254" observedRunningTime="2026-02-19 09:36:21.185959511 +0000 UTC m=+4523.929617050" watchObservedRunningTime="2026-02-19 09:36:21.191687311 +0000 UTC m=+4523.935344770" Feb 19 09:36:27 crc kubenswrapper[4780]: I0219 09:36:27.025414 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:27 crc kubenswrapper[4780]: I0219 09:36:27.026171 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:27 crc kubenswrapper[4780]: I0219 09:36:27.110842 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:27 crc kubenswrapper[4780]: I0219 09:36:27.275648 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:27 crc kubenswrapper[4780]: I0219 09:36:27.356662 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wg9dd"] Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.227633 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wg9dd" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="registry-server" containerID="cri-o://5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b" gracePeriod=2 Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.664938 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.827453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4zh6\" (UniqueName: \"kubernetes.io/projected/1c0271da-38da-457a-8f45-3e3e904dd524-kube-api-access-k4zh6\") pod \"1c0271da-38da-457a-8f45-3e3e904dd524\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.827568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-utilities\") pod \"1c0271da-38da-457a-8f45-3e3e904dd524\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.827677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-catalog-content\") pod \"1c0271da-38da-457a-8f45-3e3e904dd524\" (UID: \"1c0271da-38da-457a-8f45-3e3e904dd524\") " Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.829461 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-utilities" (OuterVolumeSpecName: "utilities") pod "1c0271da-38da-457a-8f45-3e3e904dd524" (UID: "1c0271da-38da-457a-8f45-3e3e904dd524"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.834370 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0271da-38da-457a-8f45-3e3e904dd524-kube-api-access-k4zh6" (OuterVolumeSpecName: "kube-api-access-k4zh6") pod "1c0271da-38da-457a-8f45-3e3e904dd524" (UID: "1c0271da-38da-457a-8f45-3e3e904dd524"). InnerVolumeSpecName "kube-api-access-k4zh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.886755 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c0271da-38da-457a-8f45-3e3e904dd524" (UID: "1c0271da-38da-457a-8f45-3e3e904dd524"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.929158 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4zh6\" (UniqueName: \"kubernetes.io/projected/1c0271da-38da-457a-8f45-3e3e904dd524-kube-api-access-k4zh6\") on node \"crc\" DevicePath \"\"" Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.929194 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:36:29 crc kubenswrapper[4780]: I0219 09:36:29.929207 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0271da-38da-457a-8f45-3e3e904dd524-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.236658 4780 generic.go:334] "Generic (PLEG): container finished" podID="1c0271da-38da-457a-8f45-3e3e904dd524" containerID="5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b" exitCode=0 Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.236704 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerDied","Data":"5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b"} Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.236735 4780 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wg9dd" event={"ID":"1c0271da-38da-457a-8f45-3e3e904dd524","Type":"ContainerDied","Data":"043a33c3a031cba70477a80015f058554a4ce46afc6319966bb550ce2e90e3a6"} Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.236754 4780 scope.go:117] "RemoveContainer" containerID="5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.236764 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wg9dd" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.259029 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wg9dd"] Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.266085 4780 scope.go:117] "RemoveContainer" containerID="6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.271381 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wg9dd"] Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.285679 4780 scope.go:117] "RemoveContainer" containerID="d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.326184 4780 scope.go:117] "RemoveContainer" containerID="5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b" Feb 19 09:36:30 crc kubenswrapper[4780]: E0219 09:36:30.326823 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b\": container with ID starting with 5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b not found: ID does not exist" containerID="5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 
09:36:30.326889 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b"} err="failed to get container status \"5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b\": rpc error: code = NotFound desc = could not find container \"5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b\": container with ID starting with 5689b2469a08297e666e5835b318e29513c376eb10d01af115c9f89cc38ddf4b not found: ID does not exist" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.326924 4780 scope.go:117] "RemoveContainer" containerID="6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a" Feb 19 09:36:30 crc kubenswrapper[4780]: E0219 09:36:30.327522 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a\": container with ID starting with 6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a not found: ID does not exist" containerID="6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.327559 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a"} err="failed to get container status \"6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a\": rpc error: code = NotFound desc = could not find container \"6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a\": container with ID starting with 6fb7a8be2a6f2821fe05d0e677b4366409583c203320d5d869fef6195f02dd1a not found: ID does not exist" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.327580 4780 scope.go:117] "RemoveContainer" containerID="d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f" Feb 19 09:36:30 crc 
kubenswrapper[4780]: E0219 09:36:30.328168 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f\": container with ID starting with d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f not found: ID does not exist" containerID="d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f" Feb 19 09:36:30 crc kubenswrapper[4780]: I0219 09:36:30.328247 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f"} err="failed to get container status \"d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f\": rpc error: code = NotFound desc = could not find container \"d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f\": container with ID starting with d0e6fb319f0c8f9f7b9d8d5d71ae78cfe18256b80c537517ad04f1e313d76e9f not found: ID does not exist" Feb 19 09:36:31 crc kubenswrapper[4780]: I0219 09:36:31.947536 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" path="/var/lib/kubelet/pods/1c0271da-38da-457a-8f45-3e3e904dd524/volumes" Feb 19 09:36:32 crc kubenswrapper[4780]: I0219 09:36:32.938855 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:36:32 crc kubenswrapper[4780]: E0219 09:36:32.939538 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:36:46 crc 
kubenswrapper[4780]: I0219 09:36:46.938951 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:36:46 crc kubenswrapper[4780]: E0219 09:36:46.940720 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:36:59 crc kubenswrapper[4780]: I0219 09:36:59.939022 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:36:59 crc kubenswrapper[4780]: E0219 09:36:59.940660 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:37:10 crc kubenswrapper[4780]: I0219 09:37:10.938005 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:37:10 crc kubenswrapper[4780]: E0219 09:37:10.938788 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 
19 09:37:24 crc kubenswrapper[4780]: I0219 09:37:24.938867 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:37:24 crc kubenswrapper[4780]: E0219 09:37:24.940167 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.847483 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-2764r"] Feb 19 09:37:31 crc kubenswrapper[4780]: E0219 09:37:31.848758 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="extract-utilities" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.848778 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="extract-utilities" Feb 19 09:37:31 crc kubenswrapper[4780]: E0219 09:37:31.848804 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="registry-server" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.848810 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="registry-server" Feb 19 09:37:31 crc kubenswrapper[4780]: E0219 09:37:31.848821 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="extract-content" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.848828 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" 
containerName="extract-content" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.848959 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0271da-38da-457a-8f45-3e3e904dd524" containerName="registry-server" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.849804 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.853439 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-2764r"] Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.853729 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.853744 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.853897 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.854083 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xtg8s" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.854171 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.897669 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-config\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.897711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jwv\" (UniqueName: 
\"kubernetes.io/projected/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-kube-api-access-55jwv\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.897770 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-dns-svc\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:31 crc kubenswrapper[4780]: I0219 09:37:31.999597 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-dns-svc\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:31.999705 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-config\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:31.999733 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55jwv\" (UniqueName: \"kubernetes.io/projected/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-kube-api-access-55jwv\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.000527 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-dns-svc\") 
pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.001150 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-config\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.024698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55jwv\" (UniqueName: \"kubernetes.io/projected/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-kube-api-access-55jwv\") pod \"dnsmasq-dns-7c4c8f55b5-2764r\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.125002 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-mlkrr"] Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.126153 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.138495 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-mlkrr"] Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.168555 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.202551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-config\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.202767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmssj\" (UniqueName: \"kubernetes.io/projected/34f992ca-4984-424e-9316-f12a913d2ac3-kube-api-access-qmssj\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.202798 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-dns-svc\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.303647 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-config\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.303695 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmssj\" (UniqueName: \"kubernetes.io/projected/34f992ca-4984-424e-9316-f12a913d2ac3-kube-api-access-qmssj\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: 
\"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.303721 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-dns-svc\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.304576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-dns-svc\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.305054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-config\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.324307 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmssj\" (UniqueName: \"kubernetes.io/projected/34f992ca-4984-424e-9316-f12a913d2ac3-kube-api-access-qmssj\") pod \"dnsmasq-dns-589cf688cc-mlkrr\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.444278 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.636451 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-2764r"] Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.750585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" event={"ID":"5f242ad6-7ab9-448b-93a2-2726c4bff6b4","Type":"ContainerStarted","Data":"7f16d69c694fe3396d22bccad6bb19a6bcc0e500ad40635a33824ec3a749f6e0"} Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.925060 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-mlkrr"] Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.994158 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:37:32 crc kubenswrapper[4780]: I0219 09:37:32.996163 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.001201 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.001251 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.001321 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nzkd5" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.001251 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.001888 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.013265 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114228 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114269 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114290 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whhs8\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-kube-api-access-whhs8\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114312 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114329 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114367 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114400 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2f484f2-c2e8-48de-a624-2d083d40aae5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.114427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2f484f2-c2e8-48de-a624-2d083d40aae5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.215513 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2f484f2-c2e8-48de-a624-2d083d40aae5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc 
kubenswrapper[4780]: I0219 09:37:33.215609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.215676 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.215778 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whhs8\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-kube-api-access-whhs8\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.215811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.216327 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.216413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.216456 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.216501 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.216575 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2f484f2-c2e8-48de-a624-2d083d40aae5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.216739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.217889 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.218189 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.220875 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2f484f2-c2e8-48de-a624-2d083d40aae5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.220927 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.221013 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2f484f2-c2e8-48de-a624-2d083d40aae5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.222809 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.222832 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/859d86eeb9b02e5ed8939e930770c13f548056d433f31c6b511a11aad30e4f63/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.253937 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whhs8\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-kube-api-access-whhs8\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.270648 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.281479 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.282977 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.288287 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.288658 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.288838 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.289044 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.289227 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2x24p" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.303344 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.383104 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.419561 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.419886 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snzq\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-kube-api-access-7snzq\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.419933 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.419957 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.419976 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.419994 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.420054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.420079 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.420109 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.521276 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-pod-info\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.521346 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.521369 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.521859 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.521387 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.522428 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.522451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.522480 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.522508 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.522533 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snzq\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-kube-api-access-7snzq\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.523367 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.523689 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.523861 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.527394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.527449 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.527776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.535214 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping MountDevice... Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.535271 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0aaac4a1bd488f11e190d1571579db07d43faecf57989f10e073ca6cd80540f6/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.541503 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snzq\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-kube-api-access-7snzq\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.577318 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.610251 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.771922 4780 generic.go:334] "Generic (PLEG): container finished" podID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerID="8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d" exitCode=0 Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.772253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" event={"ID":"5f242ad6-7ab9-448b-93a2-2726c4bff6b4","Type":"ContainerDied","Data":"8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d"} Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.777300 4780 generic.go:334] "Generic (PLEG): container finished" podID="34f992ca-4984-424e-9316-f12a913d2ac3" containerID="96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590" exitCode=0 Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.777327 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" event={"ID":"34f992ca-4984-424e-9316-f12a913d2ac3","Type":"ContainerDied","Data":"96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590"} Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.777342 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" event={"ID":"34f992ca-4984-424e-9316-f12a913d2ac3","Type":"ContainerStarted","Data":"5950b63beec663cff745704c2ceadfd29194e47118db42374fc8e9c26e751325"} Feb 19 09:37:33 crc kubenswrapper[4780]: W0219 09:37:33.816800 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f484f2_c2e8_48de_a624_2d083d40aae5.slice/crio-9d2ead72422d77ee6ce3b11364a5e949ac0ad9db4f6d8ba047fa2c64302dfadf WatchSource:0}: Error finding container 9d2ead72422d77ee6ce3b11364a5e949ac0ad9db4f6d8ba047fa2c64302dfadf: Status 404 returned error can't find the container with id 
9d2ead72422d77ee6ce3b11364a5e949ac0ad9db4f6d8ba047fa2c64302dfadf Feb 19 09:37:33 crc kubenswrapper[4780]: I0219 09:37:33.819768 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.064379 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:37:34 crc kubenswrapper[4780]: W0219 09:37:34.065851 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe1f9364_21cb_4a1d_9d8d_dbf4ad693f03.slice/crio-1d57184d723a4b633392184c2660432b953231c09a4fd549e3d62f2018a045cc WatchSource:0}: Error finding container 1d57184d723a4b633392184c2660432b953231c09a4fd549e3d62f2018a045cc: Status 404 returned error can't find the container with id 1d57184d723a4b633392184c2660432b953231c09a4fd549e3d62f2018a045cc Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.376579 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.379778 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.382916 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.383286 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.383615 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ct9jz" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.383881 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.388673 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.392726 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.541621 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3f34-a456-4b03-bdea-0493bcb47f00-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.541706 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.541791 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3f34-a456-4b03-bdea-0493bcb47f00-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.542027 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.542332 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjpf\" (UniqueName: \"kubernetes.io/projected/6c2a3f34-a456-4b03-bdea-0493bcb47f00-kube-api-access-bhjpf\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.542518 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3f34-a456-4b03-bdea-0493bcb47f00-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.542738 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be49cefe-7b70-414c-9bc6-953625914910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be49cefe-7b70-414c-9bc6-953625914910\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.542884 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644398 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be49cefe-7b70-414c-9bc6-953625914910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be49cefe-7b70-414c-9bc6-953625914910\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644469 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644505 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3f34-a456-4b03-bdea-0493bcb47f00-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644529 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644558 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c2a3f34-a456-4b03-bdea-0493bcb47f00-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644597 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644649 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjpf\" (UniqueName: \"kubernetes.io/projected/6c2a3f34-a456-4b03-bdea-0493bcb47f00-kube-api-access-bhjpf\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.644688 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3f34-a456-4b03-bdea-0493bcb47f00-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.645404 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3f34-a456-4b03-bdea-0493bcb47f00-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.645895 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.646090 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.647237 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2a3f34-a456-4b03-bdea-0493bcb47f00-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.668977 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3f34-a456-4b03-bdea-0493bcb47f00-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.669286 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.669332 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be49cefe-7b70-414c-9bc6-953625914910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be49cefe-7b70-414c-9bc6-953625914910\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f485b1a4b211ff85d459329a0397a1005b8d4f750b400306f9157e5a041b6551/globalmount\"" pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.669011 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3f34-a456-4b03-bdea-0493bcb47f00-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.747569 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.750189 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.752718 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.754863 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7c886" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.765903 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.769487 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjpf\" (UniqueName: \"kubernetes.io/projected/6c2a3f34-a456-4b03-bdea-0493bcb47f00-kube-api-access-bhjpf\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.787283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" event={"ID":"34f992ca-4984-424e-9316-f12a913d2ac3","Type":"ContainerStarted","Data":"7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6"} Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.787364 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.788499 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03","Type":"ContainerStarted","Data":"1d57184d723a4b633392184c2660432b953231c09a4fd549e3d62f2018a045cc"} Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.791411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" 
event={"ID":"5f242ad6-7ab9-448b-93a2-2726c4bff6b4","Type":"ContainerStarted","Data":"b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e"} Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.791573 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.793756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f2f484f2-c2e8-48de-a624-2d083d40aae5","Type":"ContainerStarted","Data":"f593f0cc59ff9ef81ba9e451ad2b558cd27621e7d02e1e27f4a7e8df51e9293a"} Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.793781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f2f484f2-c2e8-48de-a624-2d083d40aae5","Type":"ContainerStarted","Data":"9d2ead72422d77ee6ce3b11364a5e949ac0ad9db4f6d8ba047fa2c64302dfadf"} Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.806304 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be49cefe-7b70-414c-9bc6-953625914910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be49cefe-7b70-414c-9bc6-953625914910\") pod \"openstack-galera-0\" (UID: \"6c2a3f34-a456-4b03-bdea-0493bcb47f00\") " pod="openstack/openstack-galera-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.810438 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" podStartSLOduration=2.8104192059999997 podStartE2EDuration="2.810419206s" podCreationTimestamp="2026-02-19 09:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:37:34.806605643 +0000 UTC m=+4597.550263102" watchObservedRunningTime="2026-02-19 09:37:34.810419206 +0000 UTC m=+4597.554076655" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.837807 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" podStartSLOduration=3.837786394 podStartE2EDuration="3.837786394s" podCreationTimestamp="2026-02-19 09:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:37:34.828707252 +0000 UTC m=+4597.572364701" watchObservedRunningTime="2026-02-19 09:37:34.837786394 +0000 UTC m=+4597.581443843" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.847217 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w2fz\" (UniqueName: \"kubernetes.io/projected/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-kube-api-access-7w2fz\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.847258 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-kolla-config\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.847293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-config-data\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.948661 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w2fz\" (UniqueName: \"kubernetes.io/projected/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-kube-api-access-7w2fz\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" 
Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.949248 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-kolla-config\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.949398 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-config-data\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.950092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-kolla-config\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.950407 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-config-data\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:34 crc kubenswrapper[4780]: I0219 09:37:34.964150 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w2fz\" (UniqueName: \"kubernetes.io/projected/ae4a358a-9b1f-47a2-9e43-bed0e117ff1d-kube-api-access-7w2fz\") pod \"memcached-0\" (UID: \"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d\") " pod="openstack/memcached-0" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.051654 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.072449 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.481223 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.571945 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 09:37:35 crc kubenswrapper[4780]: W0219 09:37:35.587851 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2a3f34_a456_4b03_bdea_0493bcb47f00.slice/crio-90dad7ec4f5a101871162cd5f84863564c19fe21393508f9b43934aa4245e782 WatchSource:0}: Error finding container 90dad7ec4f5a101871162cd5f84863564c19fe21393508f9b43934aa4245e782: Status 404 returned error can't find the container with id 90dad7ec4f5a101871162cd5f84863564c19fe21393508f9b43934aa4245e782 Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.803862 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03","Type":"ContainerStarted","Data":"65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b"} Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.805844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d","Type":"ContainerStarted","Data":"8bfd2ba4b63bb9e28c4d05b3b65558def833ff854087be5ed3bc70dc3512eb91"} Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.805885 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ae4a358a-9b1f-47a2-9e43-bed0e117ff1d","Type":"ContainerStarted","Data":"4a638feb57192d7106709a9f46c9e9b21ab3fd0534fd8a503117f2fee81b5007"} Feb 19 09:37:35 crc 
kubenswrapper[4780]: I0219 09:37:35.805969 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.807724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c2a3f34-a456-4b03-bdea-0493bcb47f00","Type":"ContainerStarted","Data":"d0ee3f63d3122a96a45027232c9ae98405ddddeaa59c2ea051aa48e3c025f116"} Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.807750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c2a3f34-a456-4b03-bdea-0493bcb47f00","Type":"ContainerStarted","Data":"90dad7ec4f5a101871162cd5f84863564c19fe21393508f9b43934aa4245e782"} Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.929323 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.929303395 podStartE2EDuration="1.929303395s" podCreationTimestamp="2026-02-19 09:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:37:35.867994128 +0000 UTC m=+4598.611651597" watchObservedRunningTime="2026-02-19 09:37:35.929303395 +0000 UTC m=+4598.672960844" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.932138 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.933301 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.935169 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.935480 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-c5kgn" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.935676 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.936261 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 09:37:35 crc kubenswrapper[4780]: I0219 09:37:35.948322 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.073369 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.073450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-596db864-453c-4c59-baac-2f06e406d562\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-596db864-453c-4c59-baac-2f06e406d562\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.073503 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.073838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.073944 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.074076 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.074252 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qwc\" (UniqueName: \"kubernetes.io/projected/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-kube-api-access-n7qwc\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.074315 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175207 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175268 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175309 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175389 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qwc\" (UniqueName: 
\"kubernetes.io/projected/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-kube-api-access-n7qwc\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175415 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175445 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.175473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-596db864-453c-4c59-baac-2f06e406d562\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-596db864-453c-4c59-baac-2f06e406d562\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.176307 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.176794 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.176976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.178236 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.179731 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.179774 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-596db864-453c-4c59-baac-2f06e406d562\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-596db864-453c-4c59-baac-2f06e406d562\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c5793314de2963cf8daa01912faad69be6fdf405a32800d8f85e64892c04adda/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.180834 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.181415 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.215663 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qwc\" (UniqueName: \"kubernetes.io/projected/9826a80d-cdd0-4ed4-b32a-6a25d2979e68-kube-api-access-n7qwc\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.239872 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-596db864-453c-4c59-baac-2f06e406d562\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-596db864-453c-4c59-baac-2f06e406d562\") pod \"openstack-cell1-galera-0\" (UID: \"9826a80d-cdd0-4ed4-b32a-6a25d2979e68\") " pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.248396 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.723022 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 09:37:36 crc kubenswrapper[4780]: W0219 09:37:36.727884 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9826a80d_cdd0_4ed4_b32a_6a25d2979e68.slice/crio-6f3fc9fe045c5705bc0d661a121837046f64d9d3998c07405521b81328f1c05c WatchSource:0}: Error finding container 6f3fc9fe045c5705bc0d661a121837046f64d9d3998c07405521b81328f1c05c: Status 404 returned error can't find the container with id 6f3fc9fe045c5705bc0d661a121837046f64d9d3998c07405521b81328f1c05c Feb 19 09:37:36 crc kubenswrapper[4780]: I0219 09:37:36.819968 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9826a80d-cdd0-4ed4-b32a-6a25d2979e68","Type":"ContainerStarted","Data":"6f3fc9fe045c5705bc0d661a121837046f64d9d3998c07405521b81328f1c05c"} Feb 19 09:37:37 crc kubenswrapper[4780]: I0219 09:37:37.831731 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9826a80d-cdd0-4ed4-b32a-6a25d2979e68","Type":"ContainerStarted","Data":"8b512c6917e7d9f88377a74df6ee1bcd5ce45c9c39f81a0cac4153ba4e357660"} Feb 19 09:37:39 crc kubenswrapper[4780]: I0219 09:37:39.856002 4780 generic.go:334] "Generic (PLEG): container finished" podID="6c2a3f34-a456-4b03-bdea-0493bcb47f00" containerID="d0ee3f63d3122a96a45027232c9ae98405ddddeaa59c2ea051aa48e3c025f116" exitCode=0 Feb 19 09:37:39 crc 
kubenswrapper[4780]: I0219 09:37:39.856096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c2a3f34-a456-4b03-bdea-0493bcb47f00","Type":"ContainerDied","Data":"d0ee3f63d3122a96a45027232c9ae98405ddddeaa59c2ea051aa48e3c025f116"} Feb 19 09:37:39 crc kubenswrapper[4780]: I0219 09:37:39.938019 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:37:40 crc kubenswrapper[4780]: I0219 09:37:40.074308 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 09:37:40 crc kubenswrapper[4780]: I0219 09:37:40.872242 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c2a3f34-a456-4b03-bdea-0493bcb47f00","Type":"ContainerStarted","Data":"32d65b6e9f4366103329944bdaef7a934c70289b0945b3a06ca35825a57d8af2"} Feb 19 09:37:40 crc kubenswrapper[4780]: I0219 09:37:40.878311 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"00fe1350b9ecc9c8344991cd15f9b45eddd9cc38c31950ea39694cb017dfd3a2"} Feb 19 09:37:40 crc kubenswrapper[4780]: I0219 09:37:40.906807 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.906781303 podStartE2EDuration="7.906781303s" podCreationTimestamp="2026-02-19 09:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:37:40.895415186 +0000 UTC m=+4603.639072675" watchObservedRunningTime="2026-02-19 09:37:40.906781303 +0000 UTC m=+4603.650438792" Feb 19 09:37:42 crc kubenswrapper[4780]: I0219 09:37:42.170968 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:42 crc kubenswrapper[4780]: I0219 09:37:42.446404 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:37:42 crc kubenswrapper[4780]: I0219 09:37:42.504033 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-2764r"] Feb 19 09:37:42 crc kubenswrapper[4780]: I0219 09:37:42.895388 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerName="dnsmasq-dns" containerID="cri-o://b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e" gracePeriod=10 Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.385458 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.510064 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55jwv\" (UniqueName: \"kubernetes.io/projected/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-kube-api-access-55jwv\") pod \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.510198 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-dns-svc\") pod \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.510270 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-config\") pod \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\" (UID: \"5f242ad6-7ab9-448b-93a2-2726c4bff6b4\") " Feb 19 
09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.515305 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-kube-api-access-55jwv" (OuterVolumeSpecName: "kube-api-access-55jwv") pod "5f242ad6-7ab9-448b-93a2-2726c4bff6b4" (UID: "5f242ad6-7ab9-448b-93a2-2726c4bff6b4"). InnerVolumeSpecName "kube-api-access-55jwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.543430 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f242ad6-7ab9-448b-93a2-2726c4bff6b4" (UID: "5f242ad6-7ab9-448b-93a2-2726c4bff6b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.548916 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-config" (OuterVolumeSpecName: "config") pod "5f242ad6-7ab9-448b-93a2-2726c4bff6b4" (UID: "5f242ad6-7ab9-448b-93a2-2726c4bff6b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.612407 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-config\") on node \"crc\" DevicePath \"\""
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.612437 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55jwv\" (UniqueName: \"kubernetes.io/projected/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-kube-api-access-55jwv\") on node \"crc\" DevicePath \"\""
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.612448 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f242ad6-7ab9-448b-93a2-2726c4bff6b4-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.916653 4780 generic.go:334] "Generic (PLEG): container finished" podID="9826a80d-cdd0-4ed4-b32a-6a25d2979e68" containerID="8b512c6917e7d9f88377a74df6ee1bcd5ce45c9c39f81a0cac4153ba4e357660" exitCode=0
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.917100 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9826a80d-cdd0-4ed4-b32a-6a25d2979e68","Type":"ContainerDied","Data":"8b512c6917e7d9f88377a74df6ee1bcd5ce45c9c39f81a0cac4153ba4e357660"}
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.927739 4780 generic.go:334] "Generic (PLEG): container finished" podID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerID="b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e" exitCode=0
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.927818 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" event={"ID":"5f242ad6-7ab9-448b-93a2-2726c4bff6b4","Type":"ContainerDied","Data":"b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e"}
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.927883 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r" event={"ID":"5f242ad6-7ab9-448b-93a2-2726c4bff6b4","Type":"ContainerDied","Data":"7f16d69c694fe3396d22bccad6bb19a6bcc0e500ad40635a33824ec3a749f6e0"}
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.927916 4780 scope.go:117] "RemoveContainer" containerID="b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e"
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.928231 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-2764r"
Feb 19 09:37:43 crc kubenswrapper[4780]: I0219 09:37:43.982810 4780 scope.go:117] "RemoveContainer" containerID="8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d"
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.004281 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-2764r"]
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.010345 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-2764r"]
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.017518 4780 scope.go:117] "RemoveContainer" containerID="b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e"
Feb 19 09:37:44 crc kubenswrapper[4780]: E0219 09:37:44.017976 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e\": container with ID starting with b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e not found: ID does not exist" containerID="b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e"
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.018027 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e"} err="failed to get container status \"b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e\": rpc error: code = NotFound desc = could not find container \"b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e\": container with ID starting with b96a0f1294a1fe297bc76e076f0dabf1bf7e56c3aa2e0672e3d7d3c03be1f14e not found: ID does not exist"
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.018059 4780 scope.go:117] "RemoveContainer" containerID="8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d"
Feb 19 09:37:44 crc kubenswrapper[4780]: E0219 09:37:44.018444 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d\": container with ID starting with 8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d not found: ID does not exist" containerID="8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d"
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.018471 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d"} err="failed to get container status \"8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d\": rpc error: code = NotFound desc = could not find container \"8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d\": container with ID starting with 8f35141424a6b900848d7d3d427304158e63a1c7542f2c2656cf7df738a0147d not found: ID does not exist"
Feb 19 09:37:44 crc kubenswrapper[4780]: E0219 09:37:44.111975 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f242ad6_7ab9_448b_93a2_2726c4bff6b4.slice/crio-7f16d69c694fe3396d22bccad6bb19a6bcc0e500ad40635a33824ec3a749f6e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f242ad6_7ab9_448b_93a2_2726c4bff6b4.slice\": RecentStats: unable to find data in memory cache]"
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.937280 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9826a80d-cdd0-4ed4-b32a-6a25d2979e68","Type":"ContainerStarted","Data":"350de28f667c8fbfd46a6fe77caee28b9b6aed65b9e92bc52dc6e41a677edf54"}
Feb 19 09:37:44 crc kubenswrapper[4780]: I0219 09:37:44.982248 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.982219224 podStartE2EDuration="10.982219224s" podCreationTimestamp="2026-02-19 09:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:37:44.971483182 +0000 UTC m=+4607.715140661" watchObservedRunningTime="2026-02-19 09:37:44.982219224 +0000 UTC m=+4607.725876703"
Feb 19 09:37:45 crc kubenswrapper[4780]: I0219 09:37:45.052667 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 19 09:37:45 crc kubenswrapper[4780]: I0219 09:37:45.052968 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 19 09:37:45 crc kubenswrapper[4780]: I0219 09:37:45.588763 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 19 09:37:45 crc kubenswrapper[4780]: I0219 09:37:45.958165 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" path="/var/lib/kubelet/pods/5f242ad6-7ab9-448b-93a2-2726c4bff6b4/volumes"
Feb 19 09:37:46 crc kubenswrapper[4780]: I0219 09:37:46.057934 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 19 09:37:46 crc kubenswrapper[4780]: I0219 09:37:46.248919 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 19 09:37:46 crc kubenswrapper[4780]: I0219 09:37:46.249075 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 19 09:37:50 crc kubenswrapper[4780]: I0219 09:37:50.476241 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 19 09:37:50 crc kubenswrapper[4780]: I0219 09:37:50.559266 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.362701 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xmfm9"]
Feb 19 09:37:53 crc kubenswrapper[4780]: E0219 09:37:53.363369 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerName="dnsmasq-dns"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.363385 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerName="dnsmasq-dns"
Feb 19 09:37:53 crc kubenswrapper[4780]: E0219 09:37:53.363414 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerName="init"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.363423 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerName="init"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.363574 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f242ad6-7ab9-448b-93a2-2726c4bff6b4" containerName="dnsmasq-dns"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.364164 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.367775 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.372600 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xmfm9"]
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.493207 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwct7\" (UniqueName: \"kubernetes.io/projected/56257f99-b26e-494a-ac18-d12ab8a71f65-kube-api-access-vwct7\") pod \"root-account-create-update-xmfm9\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") " pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.493423 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56257f99-b26e-494a-ac18-d12ab8a71f65-operator-scripts\") pod \"root-account-create-update-xmfm9\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") " pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.595459 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwct7\" (UniqueName: \"kubernetes.io/projected/56257f99-b26e-494a-ac18-d12ab8a71f65-kube-api-access-vwct7\") pod \"root-account-create-update-xmfm9\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") " pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.595577 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56257f99-b26e-494a-ac18-d12ab8a71f65-operator-scripts\") pod \"root-account-create-update-xmfm9\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") " pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.597170 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56257f99-b26e-494a-ac18-d12ab8a71f65-operator-scripts\") pod \"root-account-create-update-xmfm9\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") " pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.633491 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwct7\" (UniqueName: \"kubernetes.io/projected/56257f99-b26e-494a-ac18-d12ab8a71f65-kube-api-access-vwct7\") pod \"root-account-create-update-xmfm9\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") " pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:53 crc kubenswrapper[4780]: I0219 09:37:53.699675 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:54 crc kubenswrapper[4780]: I0219 09:37:54.223620 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xmfm9"]
Feb 19 09:37:54 crc kubenswrapper[4780]: W0219 09:37:54.225068 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56257f99_b26e_494a_ac18_d12ab8a71f65.slice/crio-16e0f6b0bfa3ecb669660350e786d214d5213ef8b5a229ea88d94f710a8f03f3 WatchSource:0}: Error finding container 16e0f6b0bfa3ecb669660350e786d214d5213ef8b5a229ea88d94f710a8f03f3: Status 404 returned error can't find the container with id 16e0f6b0bfa3ecb669660350e786d214d5213ef8b5a229ea88d94f710a8f03f3
Feb 19 09:37:55 crc kubenswrapper[4780]: I0219 09:37:55.024407 4780 generic.go:334] "Generic (PLEG): container finished" podID="56257f99-b26e-494a-ac18-d12ab8a71f65" containerID="4b11a85e2080d14ab45111bfc7fc9f408495ed569e77bd423273fe03fe53d8ce" exitCode=0
Feb 19 09:37:55 crc kubenswrapper[4780]: I0219 09:37:55.024512 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmfm9" event={"ID":"56257f99-b26e-494a-ac18-d12ab8a71f65","Type":"ContainerDied","Data":"4b11a85e2080d14ab45111bfc7fc9f408495ed569e77bd423273fe03fe53d8ce"}
Feb 19 09:37:55 crc kubenswrapper[4780]: I0219 09:37:55.024938 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmfm9" event={"ID":"56257f99-b26e-494a-ac18-d12ab8a71f65","Type":"ContainerStarted","Data":"16e0f6b0bfa3ecb669660350e786d214d5213ef8b5a229ea88d94f710a8f03f3"}
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.420232 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.539347 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56257f99-b26e-494a-ac18-d12ab8a71f65-operator-scripts\") pod \"56257f99-b26e-494a-ac18-d12ab8a71f65\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") "
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.539569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwct7\" (UniqueName: \"kubernetes.io/projected/56257f99-b26e-494a-ac18-d12ab8a71f65-kube-api-access-vwct7\") pod \"56257f99-b26e-494a-ac18-d12ab8a71f65\" (UID: \"56257f99-b26e-494a-ac18-d12ab8a71f65\") "
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.540199 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56257f99-b26e-494a-ac18-d12ab8a71f65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56257f99-b26e-494a-ac18-d12ab8a71f65" (UID: "56257f99-b26e-494a-ac18-d12ab8a71f65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.545338 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56257f99-b26e-494a-ac18-d12ab8a71f65-kube-api-access-vwct7" (OuterVolumeSpecName: "kube-api-access-vwct7") pod "56257f99-b26e-494a-ac18-d12ab8a71f65" (UID: "56257f99-b26e-494a-ac18-d12ab8a71f65"). InnerVolumeSpecName "kube-api-access-vwct7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.641970 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwct7\" (UniqueName: \"kubernetes.io/projected/56257f99-b26e-494a-ac18-d12ab8a71f65-kube-api-access-vwct7\") on node \"crc\" DevicePath \"\""
Feb 19 09:37:56 crc kubenswrapper[4780]: I0219 09:37:56.642022 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56257f99-b26e-494a-ac18-d12ab8a71f65-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:37:57 crc kubenswrapper[4780]: I0219 09:37:57.045206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmfm9" event={"ID":"56257f99-b26e-494a-ac18-d12ab8a71f65","Type":"ContainerDied","Data":"16e0f6b0bfa3ecb669660350e786d214d5213ef8b5a229ea88d94f710a8f03f3"}
Feb 19 09:37:57 crc kubenswrapper[4780]: I0219 09:37:57.045266 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16e0f6b0bfa3ecb669660350e786d214d5213ef8b5a229ea88d94f710a8f03f3"
Feb 19 09:37:57 crc kubenswrapper[4780]: I0219 09:37:57.045334 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xmfm9"
Feb 19 09:37:59 crc kubenswrapper[4780]: I0219 09:37:59.919290 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xmfm9"]
Feb 19 09:37:59 crc kubenswrapper[4780]: I0219 09:37:59.932532 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xmfm9"]
Feb 19 09:37:59 crc kubenswrapper[4780]: I0219 09:37:59.968193 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56257f99-b26e-494a-ac18-d12ab8a71f65" path="/var/lib/kubelet/pods/56257f99-b26e-494a-ac18-d12ab8a71f65/volumes"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.915520 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8lwtc"]
Feb 19 09:38:04 crc kubenswrapper[4780]: E0219 09:38:04.916736 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56257f99-b26e-494a-ac18-d12ab8a71f65" containerName="mariadb-account-create-update"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.916765 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="56257f99-b26e-494a-ac18-d12ab8a71f65" containerName="mariadb-account-create-update"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.917081 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="56257f99-b26e-494a-ac18-d12ab8a71f65" containerName="mariadb-account-create-update"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.917982 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.926006 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8lwtc"]
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.972063 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.994755 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwpm4\" (UniqueName: \"kubernetes.io/projected/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-kube-api-access-rwpm4\") pod \"root-account-create-update-8lwtc\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") " pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:04 crc kubenswrapper[4780]: I0219 09:38:04.994880 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-operator-scripts\") pod \"root-account-create-update-8lwtc\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") " pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:05 crc kubenswrapper[4780]: I0219 09:38:05.096091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwpm4\" (UniqueName: \"kubernetes.io/projected/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-kube-api-access-rwpm4\") pod \"root-account-create-update-8lwtc\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") " pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:05 crc kubenswrapper[4780]: I0219 09:38:05.096481 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-operator-scripts\") pod \"root-account-create-update-8lwtc\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") " pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:05 crc kubenswrapper[4780]: I0219 09:38:05.097812 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-operator-scripts\") pod \"root-account-create-update-8lwtc\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") " pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:05 crc kubenswrapper[4780]: I0219 09:38:05.129931 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwpm4\" (UniqueName: \"kubernetes.io/projected/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-kube-api-access-rwpm4\") pod \"root-account-create-update-8lwtc\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") " pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:05 crc kubenswrapper[4780]: I0219 09:38:05.295223 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:05 crc kubenswrapper[4780]: I0219 09:38:05.608921 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8lwtc"]
Feb 19 09:38:06 crc kubenswrapper[4780]: I0219 09:38:06.131861 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8lwtc" event={"ID":"da4db9bd-1d71-4d42-a786-e5e7c0098d4c","Type":"ContainerStarted","Data":"0301fda3d2f4c00573561d469bed9f62602e61a25143214f8283d23a011ad855"}
Feb 19 09:38:06 crc kubenswrapper[4780]: I0219 09:38:06.132172 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8lwtc" event={"ID":"da4db9bd-1d71-4d42-a786-e5e7c0098d4c","Type":"ContainerStarted","Data":"d5b75b4bd6e6d4b7e0d3402a38d9674fa59d51d4ef33a67c9242e619d74a8573"}
Feb 19 09:38:06 crc kubenswrapper[4780]: I0219 09:38:06.156000 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8lwtc" podStartSLOduration=2.155975163 podStartE2EDuration="2.155975163s" podCreationTimestamp="2026-02-19 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:38:06.151188806 +0000 UTC m=+4628.894846295" watchObservedRunningTime="2026-02-19 09:38:06.155975163 +0000 UTC m=+4628.899632622"
Feb 19 09:38:07 crc kubenswrapper[4780]: I0219 09:38:07.143337 4780 generic.go:334] "Generic (PLEG): container finished" podID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerID="f593f0cc59ff9ef81ba9e451ad2b558cd27621e7d02e1e27f4a7e8df51e9293a" exitCode=0
Feb 19 09:38:07 crc kubenswrapper[4780]: I0219 09:38:07.143362 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f2f484f2-c2e8-48de-a624-2d083d40aae5","Type":"ContainerDied","Data":"f593f0cc59ff9ef81ba9e451ad2b558cd27621e7d02e1e27f4a7e8df51e9293a"}
Feb 19 09:38:08 crc kubenswrapper[4780]: I0219 09:38:08.153850 4780 generic.go:334] "Generic (PLEG): container finished" podID="da4db9bd-1d71-4d42-a786-e5e7c0098d4c" containerID="0301fda3d2f4c00573561d469bed9f62602e61a25143214f8283d23a011ad855" exitCode=0
Feb 19 09:38:08 crc kubenswrapper[4780]: I0219 09:38:08.154039 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8lwtc" event={"ID":"da4db9bd-1d71-4d42-a786-e5e7c0098d4c","Type":"ContainerDied","Data":"0301fda3d2f4c00573561d469bed9f62602e61a25143214f8283d23a011ad855"}
Feb 19 09:38:08 crc kubenswrapper[4780]: I0219 09:38:08.156824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f2f484f2-c2e8-48de-a624-2d083d40aae5","Type":"ContainerStarted","Data":"8061dc1535468fba7684cc462f916b9e596df35faa3390a458662fdcb4dcbb5e"}
Feb 19 09:38:08 crc kubenswrapper[4780]: I0219 09:38:08.157893 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 19 09:38:08 crc kubenswrapper[4780]: I0219 09:38:08.200181 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.200155366 podStartE2EDuration="37.200155366s" podCreationTimestamp="2026-02-19 09:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:38:08.195551614 +0000 UTC m=+4630.939209093" watchObservedRunningTime="2026-02-19 09:38:08.200155366 +0000 UTC m=+4630.943812825"
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.166366 4780 generic.go:334] "Generic (PLEG): container finished" podID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerID="65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b" exitCode=0
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.166449 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03","Type":"ContainerDied","Data":"65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b"}
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.438814 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.572067 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-operator-scripts\") pod \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") "
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.572139 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwpm4\" (UniqueName: \"kubernetes.io/projected/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-kube-api-access-rwpm4\") pod \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\" (UID: \"da4db9bd-1d71-4d42-a786-e5e7c0098d4c\") "
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.572828 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da4db9bd-1d71-4d42-a786-e5e7c0098d4c" (UID: "da4db9bd-1d71-4d42-a786-e5e7c0098d4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.577848 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-kube-api-access-rwpm4" (OuterVolumeSpecName: "kube-api-access-rwpm4") pod "da4db9bd-1d71-4d42-a786-e5e7c0098d4c" (UID: "da4db9bd-1d71-4d42-a786-e5e7c0098d4c"). InnerVolumeSpecName "kube-api-access-rwpm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.674597 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:38:09 crc kubenswrapper[4780]: I0219 09:38:09.675240 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwpm4\" (UniqueName: \"kubernetes.io/projected/da4db9bd-1d71-4d42-a786-e5e7c0098d4c-kube-api-access-rwpm4\") on node \"crc\" DevicePath \"\""
Feb 19 09:38:10 crc kubenswrapper[4780]: I0219 09:38:10.174158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8lwtc" event={"ID":"da4db9bd-1d71-4d42-a786-e5e7c0098d4c","Type":"ContainerDied","Data":"d5b75b4bd6e6d4b7e0d3402a38d9674fa59d51d4ef33a67c9242e619d74a8573"}
Feb 19 09:38:10 crc kubenswrapper[4780]: I0219 09:38:10.174215 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b75b4bd6e6d4b7e0d3402a38d9674fa59d51d4ef33a67c9242e619d74a8573"
Feb 19 09:38:10 crc kubenswrapper[4780]: I0219 09:38:10.174230 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8lwtc"
Feb 19 09:38:10 crc kubenswrapper[4780]: I0219 09:38:10.177545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03","Type":"ContainerStarted","Data":"277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f"}
Feb 19 09:38:10 crc kubenswrapper[4780]: I0219 09:38:10.177801 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 09:38:10 crc kubenswrapper[4780]: I0219 09:38:10.257169 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.257152663 podStartE2EDuration="38.257152663s" podCreationTimestamp="2026-02-19 09:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:38:10.253716719 +0000 UTC m=+4632.997374188" watchObservedRunningTime="2026-02-19 09:38:10.257152663 +0000 UTC m=+4633.000810102"
Feb 19 09:38:23 crc kubenswrapper[4780]: I0219 09:38:23.386476 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 19 09:38:23 crc kubenswrapper[4780]: I0219 09:38:23.612356 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.006724 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-hxbzv"]
Feb 19 09:38:30 crc kubenswrapper[4780]: E0219 09:38:30.008096 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4db9bd-1d71-4d42-a786-e5e7c0098d4c" containerName="mariadb-account-create-update"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.008120 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4db9bd-1d71-4d42-a786-e5e7c0098d4c" containerName="mariadb-account-create-update"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.008809 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4db9bd-1d71-4d42-a786-e5e7c0098d4c" containerName="mariadb-account-create-update"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.028348 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-hxbzv"]
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.028512 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.119077 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwwk\" (UniqueName: \"kubernetes.io/projected/a5071553-02b5-42e0-ab21-b865624efbb3-kube-api-access-2nwwk\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.119189 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-config\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.119499 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.220912 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.221078 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwwk\" (UniqueName: \"kubernetes.io/projected/a5071553-02b5-42e0-ab21-b865624efbb3-kube-api-access-2nwwk\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.221172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-config\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.223005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.223793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-config\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.261470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwwk\" (UniqueName: \"kubernetes.io/projected/a5071553-02b5-42e0-ab21-b865624efbb3-kube-api-access-2nwwk\") pod \"dnsmasq-dns-54dc9c94cc-hxbzv\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") " pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.355183 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.681892 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 09:38:30 crc kubenswrapper[4780]: I0219 09:38:30.950713 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-hxbzv"]
Feb 19 09:38:31 crc kubenswrapper[4780]: W0219 09:38:31.280937 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5071553_02b5_42e0_ab21_b865624efbb3.slice/crio-acf9bae6a1dd719bb2f678c7d29834d239f251ea7eb8cba56a06d7f84ca76b4c WatchSource:0}: Error finding container acf9bae6a1dd719bb2f678c7d29834d239f251ea7eb8cba56a06d7f84ca76b4c: Status 404 returned error can't find the container with id acf9bae6a1dd719bb2f678c7d29834d239f251ea7eb8cba56a06d7f84ca76b4c
Feb 19 09:38:31 crc kubenswrapper[4780]: I0219 09:38:31.369323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" event={"ID":"a5071553-02b5-42e0-ab21-b865624efbb3","Type":"ContainerStarted","Data":"acf9bae6a1dd719bb2f678c7d29834d239f251ea7eb8cba56a06d7f84ca76b4c"}
Feb 19 09:38:31 crc kubenswrapper[4780]: I0219 09:38:31.566739 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 09:38:32 crc kubenswrapper[4780]: I0219 09:38:32.378514 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5071553-02b5-42e0-ab21-b865624efbb3" containerID="2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5" exitCode=0
Feb 19
09:38:32 crc kubenswrapper[4780]: I0219 09:38:32.378578 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" event={"ID":"a5071553-02b5-42e0-ab21-b865624efbb3","Type":"ContainerDied","Data":"2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5"} Feb 19 09:38:32 crc kubenswrapper[4780]: I0219 09:38:32.903367 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="rabbitmq" containerID="cri-o://8061dc1535468fba7684cc462f916b9e596df35faa3390a458662fdcb4dcbb5e" gracePeriod=604798 Feb 19 09:38:33 crc kubenswrapper[4780]: I0219 09:38:33.383672 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.238:5672: connect: connection refused" Feb 19 09:38:33 crc kubenswrapper[4780]: I0219 09:38:33.393932 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" event={"ID":"a5071553-02b5-42e0-ab21-b865624efbb3","Type":"ContainerStarted","Data":"107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac"} Feb 19 09:38:33 crc kubenswrapper[4780]: I0219 09:38:33.394501 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" Feb 19 09:38:33 crc kubenswrapper[4780]: I0219 09:38:33.422515 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" podStartSLOduration=4.422483777 podStartE2EDuration="4.422483777s" podCreationTimestamp="2026-02-19 09:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:38:33.417972696 +0000 UTC m=+4656.161630145" watchObservedRunningTime="2026-02-19 09:38:33.422483777 
+0000 UTC m=+4656.166141266" Feb 19 09:38:33 crc kubenswrapper[4780]: I0219 09:38:33.554582 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="rabbitmq" containerID="cri-o://277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f" gracePeriod=604799 Feb 19 09:38:33 crc kubenswrapper[4780]: I0219 09:38:33.611489 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.239:5672: connect: connection refused" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.451348 4780 generic.go:334] "Generic (PLEG): container finished" podID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerID="8061dc1535468fba7684cc462f916b9e596df35faa3390a458662fdcb4dcbb5e" exitCode=0 Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.451471 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f2f484f2-c2e8-48de-a624-2d083d40aae5","Type":"ContainerDied","Data":"8061dc1535468fba7684cc462f916b9e596df35faa3390a458662fdcb4dcbb5e"} Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.584436 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.585580 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-server-conf\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.585637 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-plugins-conf\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.585678 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whhs8\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-kube-api-access-whhs8\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.585731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2f484f2-c2e8-48de-a624-2d083d40aae5-erlang-cookie-secret\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.585791 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-confd\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.585843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2f484f2-c2e8-48de-a624-2d083d40aae5-pod-info\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.586058 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.586095 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-erlang-cookie\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.586164 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-plugins\") pod \"f2f484f2-c2e8-48de-a624-2d083d40aae5\" (UID: \"f2f484f2-c2e8-48de-a624-2d083d40aae5\") " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.587015 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.587034 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.587311 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.587623 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.587648 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.587667 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.596075 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f2f484f2-c2e8-48de-a624-2d083d40aae5-pod-info" (OuterVolumeSpecName: 
"pod-info") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.596062 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2f484f2-c2e8-48de-a624-2d083d40aae5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.596229 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-kube-api-access-whhs8" (OuterVolumeSpecName: "kube-api-access-whhs8") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "kube-api-access-whhs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.614331 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082" (OuterVolumeSpecName: "persistence") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "pvc-f7bebe32-563f-41a4-ae0b-867afe792082". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.633026 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-server-conf" (OuterVolumeSpecName: "server-conf") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.688755 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2f484f2-c2e8-48de-a624-2d083d40aae5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.688789 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2f484f2-c2e8-48de-a624-2d083d40aae5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.688822 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") on node \"crc\" " Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.688835 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2f484f2-c2e8-48de-a624-2d083d40aae5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.688846 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whhs8\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-kube-api-access-whhs8\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.701502 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f2f484f2-c2e8-48de-a624-2d083d40aae5" (UID: "f2f484f2-c2e8-48de-a624-2d083d40aae5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.743034 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.743341 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7bebe32-563f-41a4-ae0b-867afe792082" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082") on node "crc" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.790593 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2f484f2-c2e8-48de-a624-2d083d40aae5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:39 crc kubenswrapper[4780]: I0219 09:38:39.790943 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.071714 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.195183 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-confd\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.195772 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-pod-info\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.195798 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-plugins-conf\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.195852 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7snzq\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-kube-api-access-7snzq\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.195898 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-erlang-cookie\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.195981 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.196099 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-plugins\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.196183 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-erlang-cookie-secret\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.196205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-server-conf\") pod \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\" (UID: \"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03\") " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.196527 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.196632 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.197474 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.197528 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.200663 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.200849 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-pod-info" (OuterVolumeSpecName: "pod-info") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.200848 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-kube-api-access-7snzq" (OuterVolumeSpecName: "kube-api-access-7snzq") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "kube-api-access-7snzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.213625 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58" (OuterVolumeSpecName: "persistence") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "pvc-45cfd6fd-4eae-4933-9900-e99dca41db58". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.223593 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-server-conf" (OuterVolumeSpecName: "server-conf") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298715 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7snzq\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-kube-api-access-7snzq\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298749 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298780 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") on node \"crc\" " Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298801 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298812 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298821 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.298829 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-pod-info\") on node \"crc\" 
DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.303014 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" (UID: "be1f9364-21cb-4a1d-9d8d-dbf4ad693f03"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.313751 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.313884 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-45cfd6fd-4eae-4933-9900-e99dca41db58" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58") on node "crc" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.356309 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.402452 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.402529 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.407186 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-mlkrr"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.407510 4780 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" podUID="34f992ca-4984-424e-9316-f12a913d2ac3" containerName="dnsmasq-dns" containerID="cri-o://7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6" gracePeriod=10 Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.465531 4780 generic.go:334] "Generic (PLEG): container finished" podID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerID="277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f" exitCode=0 Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.465663 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.466672 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03","Type":"ContainerDied","Data":"277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f"} Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.466715 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"be1f9364-21cb-4a1d-9d8d-dbf4ad693f03","Type":"ContainerDied","Data":"1d57184d723a4b633392184c2660432b953231c09a4fd549e3d62f2018a045cc"} Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.466736 4780 scope.go:117] "RemoveContainer" containerID="277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.469935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f2f484f2-c2e8-48de-a624-2d083d40aae5","Type":"ContainerDied","Data":"9d2ead72422d77ee6ce3b11364a5e949ac0ad9db4f6d8ba047fa2c64302dfadf"} Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.470036 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.500551 4780 scope.go:117] "RemoveContainer" containerID="65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.512989 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.521403 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.529386 4780 scope.go:117] "RemoveContainer" containerID="277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.529711 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: E0219 09:38:40.529775 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f\": container with ID starting with 277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f not found: ID does not exist" containerID="277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.529851 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f"} err="failed to get container status \"277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f\": rpc error: code = NotFound desc = could not find container \"277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f\": container with ID starting with 277391ed3f649e8ca4c7a16a809585164937d15acb92670d6040f655b9200e0f not found: ID does not exist" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 
09:38:40.529872 4780 scope.go:117] "RemoveContainer" containerID="65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b" Feb 19 09:38:40 crc kubenswrapper[4780]: E0219 09:38:40.530368 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b\": container with ID starting with 65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b not found: ID does not exist" containerID="65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.530426 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b"} err="failed to get container status \"65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b\": rpc error: code = NotFound desc = could not find container \"65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b\": container with ID starting with 65e8e571e0785eaf66f3bc495d9c0d4e1ce81787ee581065e25d166a9e4a0e4b not found: ID does not exist" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.530468 4780 scope.go:117] "RemoveContainer" containerID="8061dc1535468fba7684cc462f916b9e596df35faa3390a458662fdcb4dcbb5e" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.535665 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.560366 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: E0219 09:38:40.560656 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="setup-container" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.560900 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="setup-container" Feb 19 09:38:40 crc kubenswrapper[4780]: E0219 09:38:40.560920 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="setup-container" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.560926 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="setup-container" Feb 19 09:38:40 crc kubenswrapper[4780]: E0219 09:38:40.560936 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="rabbitmq" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.560943 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="rabbitmq" Feb 19 09:38:40 crc kubenswrapper[4780]: E0219 09:38:40.560958 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="rabbitmq" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.560964 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="rabbitmq" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.561105 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" containerName="rabbitmq" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.561115 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" containerName="rabbitmq" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.561897 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.564736 4780 scope.go:117] "RemoveContainer" containerID="f593f0cc59ff9ef81ba9e451ad2b558cd27621e7d02e1e27f4a7e8df51e9293a" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.564899 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.564965 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.564990 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nzkd5" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.565026 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.565109 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.574197 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.575439 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.577957 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.578101 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2x24p" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.578296 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.578446 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.584626 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.585233 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.587473 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.707938 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708013 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfz8\" (UniqueName: \"kubernetes.io/projected/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-kube-api-access-btfz8\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708045 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708168 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708231 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708319 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kftf\" (UniqueName: \"kubernetes.io/projected/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-kube-api-access-9kftf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708405 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708534 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708681 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708814 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708842 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.708884 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810090 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810178 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btfz8\" (UniqueName: \"kubernetes.io/projected/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-kube-api-access-btfz8\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810271 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810293 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810327 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810350 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810383 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810412 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810440 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kftf\" (UniqueName: \"kubernetes.io/projected/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-kube-api-access-9kftf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810461 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810482 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810509 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810538 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810560 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810592 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810611 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.810636 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.812492 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.812653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.812797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.812968 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.813159 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.813238 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.815080 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.815103 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0aaac4a1bd488f11e190d1571579db07d43faecf57989f10e073ca6cd80540f6/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.816070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.816152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.816490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.816597 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.816749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.817242 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.819399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.820717 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.825550 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.825604 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/859d86eeb9b02e5ed8939e930770c13f548056d433f31c6b511a11aad30e4f63/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.831193 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.832243 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kftf\" (UniqueName: \"kubernetes.io/projected/1e3cadc0-1b7f-43a8-b290-f0f76853af4a-kube-api-access-9kftf\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.834024 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btfz8\" (UniqueName: \"kubernetes.io/projected/7b1ad9ff-a229-4a5d-ae0a-4df21033325a-kube-api-access-btfz8\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.852453 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45cfd6fd-4eae-4933-9900-e99dca41db58\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b1ad9ff-a229-4a5d-ae0a-4df21033325a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 
09:38:40.884501 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7bebe32-563f-41a4-ae0b-867afe792082\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7bebe32-563f-41a4-ae0b-867afe792082\") pod \"rabbitmq-server-0\" (UID: \"1e3cadc0-1b7f-43a8-b290-f0f76853af4a\") " pod="openstack/rabbitmq-server-0" Feb 19 09:38:40 crc kubenswrapper[4780]: I0219 09:38:40.900860 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.012765 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-config\") pod \"34f992ca-4984-424e-9316-f12a913d2ac3\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.012876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmssj\" (UniqueName: \"kubernetes.io/projected/34f992ca-4984-424e-9316-f12a913d2ac3-kube-api-access-qmssj\") pod \"34f992ca-4984-424e-9316-f12a913d2ac3\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.012928 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-dns-svc\") pod \"34f992ca-4984-424e-9316-f12a913d2ac3\" (UID: \"34f992ca-4984-424e-9316-f12a913d2ac3\") " Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.017475 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f992ca-4984-424e-9316-f12a913d2ac3-kube-api-access-qmssj" (OuterVolumeSpecName: "kube-api-access-qmssj") pod "34f992ca-4984-424e-9316-f12a913d2ac3" (UID: "34f992ca-4984-424e-9316-f12a913d2ac3"). InnerVolumeSpecName "kube-api-access-qmssj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.055382 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-config" (OuterVolumeSpecName: "config") pod "34f992ca-4984-424e-9316-f12a913d2ac3" (UID: "34f992ca-4984-424e-9316-f12a913d2ac3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.072009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34f992ca-4984-424e-9316-f12a913d2ac3" (UID: "34f992ca-4984-424e-9316-f12a913d2ac3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.114316 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.114340 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmssj\" (UniqueName: \"kubernetes.io/projected/34f992ca-4984-424e-9316-f12a913d2ac3-kube-api-access-qmssj\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.114350 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f992ca-4984-424e-9316-f12a913d2ac3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.181396 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.337696 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.411859 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 09:38:41 crc kubenswrapper[4780]: W0219 09:38:41.419771 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3cadc0_1b7f_43a8_b290_f0f76853af4a.slice/crio-f943e410f46706689bad31fed2cc887cdb1f7fdd752119c1184132bcc35c3fd2 WatchSource:0}: Error finding container f943e410f46706689bad31fed2cc887cdb1f7fdd752119c1184132bcc35c3fd2: Status 404 returned error can't find the container with id f943e410f46706689bad31fed2cc887cdb1f7fdd752119c1184132bcc35c3fd2 Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.476940 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b1ad9ff-a229-4a5d-ae0a-4df21033325a","Type":"ContainerStarted","Data":"d0c05c095525def00a26f661899ca7deb4ad26ef0eae89c3fe6b362759b2a0d8"} Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.479909 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1e3cadc0-1b7f-43a8-b290-f0f76853af4a","Type":"ContainerStarted","Data":"f943e410f46706689bad31fed2cc887cdb1f7fdd752119c1184132bcc35c3fd2"} Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.482505 4780 generic.go:334] "Generic (PLEG): container finished" podID="34f992ca-4984-424e-9316-f12a913d2ac3" containerID="7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6" exitCode=0 Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.482577 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" 
event={"ID":"34f992ca-4984-424e-9316-f12a913d2ac3","Type":"ContainerDied","Data":"7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6"} Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.482631 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" event={"ID":"34f992ca-4984-424e-9316-f12a913d2ac3","Type":"ContainerDied","Data":"5950b63beec663cff745704c2ceadfd29194e47118db42374fc8e9c26e751325"} Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.482657 4780 scope.go:117] "RemoveContainer" containerID="7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.482828 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-mlkrr" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.506380 4780 scope.go:117] "RemoveContainer" containerID="96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.537552 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-mlkrr"] Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.539453 4780 scope.go:117] "RemoveContainer" containerID="7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6" Feb 19 09:38:41 crc kubenswrapper[4780]: E0219 09:38:41.540541 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6\": container with ID starting with 7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6 not found: ID does not exist" containerID="7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.540579 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6"} err="failed to get container status \"7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6\": rpc error: code = NotFound desc = could not find container \"7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6\": container with ID starting with 7d1bf4286e975ac49a17a34cf7ecbf2ea02b4dc9bedb03bf91724ddc66dac7f6 not found: ID does not exist" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.540607 4780 scope.go:117] "RemoveContainer" containerID="96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590" Feb 19 09:38:41 crc kubenswrapper[4780]: E0219 09:38:41.541161 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590\": container with ID starting with 96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590 not found: ID does not exist" containerID="96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.541216 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590"} err="failed to get container status \"96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590\": rpc error: code = NotFound desc = could not find container \"96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590\": container with ID starting with 96d24d1220f4c36268b3dadc8569b9f476662c78dde37d006f76cff049550590 not found: ID does not exist" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.544473 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-mlkrr"] Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.955719 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="34f992ca-4984-424e-9316-f12a913d2ac3" path="/var/lib/kubelet/pods/34f992ca-4984-424e-9316-f12a913d2ac3/volumes" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.957244 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1f9364-21cb-4a1d-9d8d-dbf4ad693f03" path="/var/lib/kubelet/pods/be1f9364-21cb-4a1d-9d8d-dbf4ad693f03/volumes" Feb 19 09:38:41 crc kubenswrapper[4780]: I0219 09:38:41.959688 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f484f2-c2e8-48de-a624-2d083d40aae5" path="/var/lib/kubelet/pods/f2f484f2-c2e8-48de-a624-2d083d40aae5/volumes" Feb 19 09:38:43 crc kubenswrapper[4780]: I0219 09:38:43.516409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b1ad9ff-a229-4a5d-ae0a-4df21033325a","Type":"ContainerStarted","Data":"3081fc68c531f5794c8e11c94efefc1cfdcbce81dc56b09773d0b943c4e9dabc"} Feb 19 09:38:43 crc kubenswrapper[4780]: I0219 09:38:43.519801 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1e3cadc0-1b7f-43a8-b290-f0f76853af4a","Type":"ContainerStarted","Data":"9ade59a89087b431f83f10bc4364373ec902f2c2c24a9a94a823f56f988adca1"} Feb 19 09:39:16 crc kubenswrapper[4780]: E0219 09:39:16.031142 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b1ad9ff_a229_4a5d_ae0a_4df21033325a.slice/crio-conmon-3081fc68c531f5794c8e11c94efefc1cfdcbce81dc56b09773d0b943c4e9dabc.scope\": RecentStats: unable to find data in memory cache]" Feb 19 09:39:16 crc kubenswrapper[4780]: I0219 09:39:16.873895 4780 generic.go:334] "Generic (PLEG): container finished" podID="7b1ad9ff-a229-4a5d-ae0a-4df21033325a" containerID="3081fc68c531f5794c8e11c94efefc1cfdcbce81dc56b09773d0b943c4e9dabc" exitCode=0 Feb 19 09:39:16 crc kubenswrapper[4780]: I0219 09:39:16.873960 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b1ad9ff-a229-4a5d-ae0a-4df21033325a","Type":"ContainerDied","Data":"3081fc68c531f5794c8e11c94efefc1cfdcbce81dc56b09773d0b943c4e9dabc"} Feb 19 09:39:17 crc kubenswrapper[4780]: I0219 09:39:17.887748 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b1ad9ff-a229-4a5d-ae0a-4df21033325a","Type":"ContainerStarted","Data":"48c13bc7c9d865327489a16f1637bdafba19ec32f3e6ec069c4a1bc1158c8f52"} Feb 19 09:39:17 crc kubenswrapper[4780]: I0219 09:39:17.888437 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:39:17 crc kubenswrapper[4780]: I0219 09:39:17.891722 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e3cadc0-1b7f-43a8-b290-f0f76853af4a" containerID="9ade59a89087b431f83f10bc4364373ec902f2c2c24a9a94a823f56f988adca1" exitCode=0 Feb 19 09:39:17 crc kubenswrapper[4780]: I0219 09:39:17.891780 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1e3cadc0-1b7f-43a8-b290-f0f76853af4a","Type":"ContainerDied","Data":"9ade59a89087b431f83f10bc4364373ec902f2c2c24a9a94a823f56f988adca1"} Feb 19 09:39:17 crc kubenswrapper[4780]: I0219 09:39:17.926388 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.926362543 podStartE2EDuration="37.926362543s" podCreationTimestamp="2026-02-19 09:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:39:17.917089756 +0000 UTC m=+4700.660747215" watchObservedRunningTime="2026-02-19 09:39:17.926362543 +0000 UTC m=+4700.670020002" Feb 19 09:39:18 crc kubenswrapper[4780]: I0219 09:39:18.901209 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"1e3cadc0-1b7f-43a8-b290-f0f76853af4a","Type":"ContainerStarted","Data":"da2770177ee6f33ce10fe310964a13cda25418900690aa2d18c44bdcd9f5b793"} Feb 19 09:39:18 crc kubenswrapper[4780]: I0219 09:39:18.901803 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 09:39:18 crc kubenswrapper[4780]: I0219 09:39:18.935402 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.935379101 podStartE2EDuration="38.935379101s" podCreationTimestamp="2026-02-19 09:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:39:18.929019655 +0000 UTC m=+4701.672677164" watchObservedRunningTime="2026-02-19 09:39:18.935379101 +0000 UTC m=+4701.679036560" Feb 19 09:39:30 crc kubenswrapper[4780]: I0219 09:39:30.904363 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 09:39:31 crc kubenswrapper[4780]: I0219 09:39:31.185767 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.661181 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 09:39:38 crc kubenswrapper[4780]: E0219 09:39:38.662062 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f992ca-4984-424e-9316-f12a913d2ac3" containerName="init" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.662074 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f992ca-4984-424e-9316-f12a913d2ac3" containerName="init" Feb 19 09:39:38 crc kubenswrapper[4780]: E0219 09:39:38.662098 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f992ca-4984-424e-9316-f12a913d2ac3" containerName="dnsmasq-dns" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 
09:39:38.662104 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f992ca-4984-424e-9316-f12a913d2ac3" containerName="dnsmasq-dns" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.662246 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f992ca-4984-424e-9316-f12a913d2ac3" containerName="dnsmasq-dns" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.662725 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.665992 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vvhh7" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.681809 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.815795 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f554t\" (UniqueName: \"kubernetes.io/projected/902a23fb-4401-4fe2-bd83-c72f941db457-kube-api-access-f554t\") pod \"mariadb-client\" (UID: \"902a23fb-4401-4fe2-bd83-c72f941db457\") " pod="openstack/mariadb-client" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.917729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f554t\" (UniqueName: \"kubernetes.io/projected/902a23fb-4401-4fe2-bd83-c72f941db457-kube-api-access-f554t\") pod \"mariadb-client\" (UID: \"902a23fb-4401-4fe2-bd83-c72f941db457\") " pod="openstack/mariadb-client" Feb 19 09:39:38 crc kubenswrapper[4780]: I0219 09:39:38.951383 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f554t\" (UniqueName: \"kubernetes.io/projected/902a23fb-4401-4fe2-bd83-c72f941db457-kube-api-access-f554t\") pod \"mariadb-client\" (UID: \"902a23fb-4401-4fe2-bd83-c72f941db457\") " pod="openstack/mariadb-client" Feb 19 09:39:39 crc 
kubenswrapper[4780]: I0219 09:39:39.009961 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:39:39 crc kubenswrapper[4780]: I0219 09:39:39.607555 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:39:39 crc kubenswrapper[4780]: W0219 09:39:39.615482 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod902a23fb_4401_4fe2_bd83_c72f941db457.slice/crio-5f0edb16f695c718e0c23b75ede3a03e3b1c9e4c9789455eee659e8ef3cf8485 WatchSource:0}: Error finding container 5f0edb16f695c718e0c23b75ede3a03e3b1c9e4c9789455eee659e8ef3cf8485: Status 404 returned error can't find the container with id 5f0edb16f695c718e0c23b75ede3a03e3b1c9e4c9789455eee659e8ef3cf8485 Feb 19 09:39:40 crc kubenswrapper[4780]: I0219 09:39:40.117780 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"902a23fb-4401-4fe2-bd83-c72f941db457","Type":"ContainerStarted","Data":"5f0edb16f695c718e0c23b75ede3a03e3b1c9e4c9789455eee659e8ef3cf8485"} Feb 19 09:39:41 crc kubenswrapper[4780]: I0219 09:39:41.128056 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"902a23fb-4401-4fe2-bd83-c72f941db457","Type":"ContainerStarted","Data":"dc6fab3e196fdf15c331fbfaa42547398282b99c4b483927b3ac99a723709522"} Feb 19 09:39:41 crc kubenswrapper[4780]: I0219 09:39:41.152077 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.552831377 podStartE2EDuration="3.152052442s" podCreationTimestamp="2026-02-19 09:39:38 +0000 UTC" firstStartedPulling="2026-02-19 09:39:39.620776599 +0000 UTC m=+4722.364434078" lastFinishedPulling="2026-02-19 09:39:40.219997684 +0000 UTC m=+4722.963655143" observedRunningTime="2026-02-19 09:39:41.146875236 +0000 UTC m=+4723.890532735" 
watchObservedRunningTime="2026-02-19 09:39:41.152052442 +0000 UTC m=+4723.895709931" Feb 19 09:39:42 crc kubenswrapper[4780]: E0219 09:39:42.666480 4780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.103:55284->38.102.83.103:37621: write tcp 38.102.83.103:55284->38.102.83.103:37621: write: broken pipe Feb 19 09:39:54 crc kubenswrapper[4780]: I0219 09:39:54.940919 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:39:54 crc kubenswrapper[4780]: I0219 09:39:54.941502 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="902a23fb-4401-4fe2-bd83-c72f941db457" containerName="mariadb-client" containerID="cri-o://dc6fab3e196fdf15c331fbfaa42547398282b99c4b483927b3ac99a723709522" gracePeriod=30 Feb 19 09:39:55 crc kubenswrapper[4780]: I0219 09:39:55.622002 4780 generic.go:334] "Generic (PLEG): container finished" podID="902a23fb-4401-4fe2-bd83-c72f941db457" containerID="dc6fab3e196fdf15c331fbfaa42547398282b99c4b483927b3ac99a723709522" exitCode=143 Feb 19 09:39:55 crc kubenswrapper[4780]: I0219 09:39:55.622103 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"902a23fb-4401-4fe2-bd83-c72f941db457","Type":"ContainerDied","Data":"dc6fab3e196fdf15c331fbfaa42547398282b99c4b483927b3ac99a723709522"} Feb 19 09:39:55 crc kubenswrapper[4780]: I0219 09:39:55.958279 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.094043 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f554t\" (UniqueName: \"kubernetes.io/projected/902a23fb-4401-4fe2-bd83-c72f941db457-kube-api-access-f554t\") pod \"902a23fb-4401-4fe2-bd83-c72f941db457\" (UID: \"902a23fb-4401-4fe2-bd83-c72f941db457\") " Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.102249 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902a23fb-4401-4fe2-bd83-c72f941db457-kube-api-access-f554t" (OuterVolumeSpecName: "kube-api-access-f554t") pod "902a23fb-4401-4fe2-bd83-c72f941db457" (UID: "902a23fb-4401-4fe2-bd83-c72f941db457"). InnerVolumeSpecName "kube-api-access-f554t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.196274 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f554t\" (UniqueName: \"kubernetes.io/projected/902a23fb-4401-4fe2-bd83-c72f941db457-kube-api-access-f554t\") on node \"crc\" DevicePath \"\"" Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.632238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"902a23fb-4401-4fe2-bd83-c72f941db457","Type":"ContainerDied","Data":"5f0edb16f695c718e0c23b75ede3a03e3b1c9e4c9789455eee659e8ef3cf8485"} Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.632576 4780 scope.go:117] "RemoveContainer" containerID="dc6fab3e196fdf15c331fbfaa42547398282b99c4b483927b3ac99a723709522" Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.632389 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.676583 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:39:56 crc kubenswrapper[4780]: I0219 09:39:56.683836 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:39:57 crc kubenswrapper[4780]: I0219 09:39:57.954745 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902a23fb-4401-4fe2-bd83-c72f941db457" path="/var/lib/kubelet/pods/902a23fb-4401-4fe2-bd83-c72f941db457/volumes" Feb 19 09:40:06 crc kubenswrapper[4780]: I0219 09:40:06.335841 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:40:06 crc kubenswrapper[4780]: I0219 09:40:06.336341 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:40:36 crc kubenswrapper[4780]: I0219 09:40:36.336473 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:40:36 crc kubenswrapper[4780]: I0219 09:40:36.336900 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.325062 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6tvv"] Feb 19 09:40:49 crc kubenswrapper[4780]: E0219 09:40:49.325895 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902a23fb-4401-4fe2-bd83-c72f941db457" containerName="mariadb-client" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.325911 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="902a23fb-4401-4fe2-bd83-c72f941db457" containerName="mariadb-client" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.326108 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="902a23fb-4401-4fe2-bd83-c72f941db457" containerName="mariadb-client" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.327412 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.403041 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6tvv"] Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.421762 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpwc\" (UniqueName: \"kubernetes.io/projected/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-kube-api-access-8gpwc\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.421866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-catalog-content\") pod \"redhat-marketplace-g6tvv\" (UID: 
\"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.421980 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-utilities\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.524105 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gpwc\" (UniqueName: \"kubernetes.io/projected/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-kube-api-access-8gpwc\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.524241 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-catalog-content\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.524378 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-utilities\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.524904 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-catalog-content\") pod \"redhat-marketplace-g6tvv\" (UID: 
\"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.525041 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-utilities\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.555812 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gpwc\" (UniqueName: \"kubernetes.io/projected/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-kube-api-access-8gpwc\") pod \"redhat-marketplace-g6tvv\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:49 crc kubenswrapper[4780]: I0219 09:40:49.662119 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:50 crc kubenswrapper[4780]: I0219 09:40:50.102956 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6tvv"] Feb 19 09:40:50 crc kubenswrapper[4780]: I0219 09:40:50.219450 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6tvv" event={"ID":"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc","Type":"ContainerStarted","Data":"4268dd9841843ed690adb934e8f12bb17370c2b6cd615b43381ecf770ed9432f"} Feb 19 09:40:51 crc kubenswrapper[4780]: I0219 09:40:51.240083 4780 generic.go:334] "Generic (PLEG): container finished" podID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerID="43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478" exitCode=0 Feb 19 09:40:51 crc kubenswrapper[4780]: I0219 09:40:51.240151 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6tvv" 
event={"ID":"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc","Type":"ContainerDied","Data":"43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478"} Feb 19 09:40:52 crc kubenswrapper[4780]: I0219 09:40:52.250359 4780 generic.go:334] "Generic (PLEG): container finished" podID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerID="801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3" exitCode=0 Feb 19 09:40:52 crc kubenswrapper[4780]: I0219 09:40:52.250440 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6tvv" event={"ID":"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc","Type":"ContainerDied","Data":"801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3"} Feb 19 09:40:53 crc kubenswrapper[4780]: I0219 09:40:53.262353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6tvv" event={"ID":"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc","Type":"ContainerStarted","Data":"80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09"} Feb 19 09:40:53 crc kubenswrapper[4780]: I0219 09:40:53.288830 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6tvv" podStartSLOduration=2.883737967 podStartE2EDuration="4.28880584s" podCreationTimestamp="2026-02-19 09:40:49 +0000 UTC" firstStartedPulling="2026-02-19 09:40:51.245208632 +0000 UTC m=+4793.988866091" lastFinishedPulling="2026-02-19 09:40:52.650276515 +0000 UTC m=+4795.393933964" observedRunningTime="2026-02-19 09:40:53.282721902 +0000 UTC m=+4796.026379351" watchObservedRunningTime="2026-02-19 09:40:53.28880584 +0000 UTC m=+4796.032463299" Feb 19 09:40:59 crc kubenswrapper[4780]: I0219 09:40:59.662829 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:59 crc kubenswrapper[4780]: I0219 09:40:59.663248 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:40:59 crc kubenswrapper[4780]: I0219 09:40:59.738489 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:41:00 crc kubenswrapper[4780]: I0219 09:41:00.387261 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:41:00 crc kubenswrapper[4780]: I0219 09:41:00.459807 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6tvv"] Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.338988 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6tvv" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="registry-server" containerID="cri-o://80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09" gracePeriod=2 Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.877513 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.936595 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gpwc\" (UniqueName: \"kubernetes.io/projected/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-kube-api-access-8gpwc\") pod \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.936663 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-utilities\") pod \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.936706 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-catalog-content\") pod \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\" (UID: \"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc\") " Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.937666 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-utilities" (OuterVolumeSpecName: "utilities") pod "512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" (UID: "512cfa75-112b-4906-a2cf-8ec7d7b1e0fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.943012 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-kube-api-access-8gpwc" (OuterVolumeSpecName: "kube-api-access-8gpwc") pod "512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" (UID: "512cfa75-112b-4906-a2cf-8ec7d7b1e0fc"). InnerVolumeSpecName "kube-api-access-8gpwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:41:02 crc kubenswrapper[4780]: I0219 09:41:02.975310 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" (UID: "512cfa75-112b-4906-a2cf-8ec7d7b1e0fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.038015 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gpwc\" (UniqueName: \"kubernetes.io/projected/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-kube-api-access-8gpwc\") on node \"crc\" DevicePath \"\"" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.038048 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.038059 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.349040 4780 generic.go:334] "Generic (PLEG): container finished" podID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerID="80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09" exitCode=0 Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.349089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6tvv" event={"ID":"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc","Type":"ContainerDied","Data":"80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09"} Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.349187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-g6tvv" event={"ID":"512cfa75-112b-4906-a2cf-8ec7d7b1e0fc","Type":"ContainerDied","Data":"4268dd9841843ed690adb934e8f12bb17370c2b6cd615b43381ecf770ed9432f"} Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.349199 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6tvv" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.349246 4780 scope.go:117] "RemoveContainer" containerID="80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.371775 4780 scope.go:117] "RemoveContainer" containerID="801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.397870 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6tvv"] Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.401559 4780 scope.go:117] "RemoveContainer" containerID="43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.407626 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6tvv"] Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.425328 4780 scope.go:117] "RemoveContainer" containerID="80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09" Feb 19 09:41:03 crc kubenswrapper[4780]: E0219 09:41:03.426832 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09\": container with ID starting with 80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09 not found: ID does not exist" containerID="80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.426916 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09"} err="failed to get container status \"80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09\": rpc error: code = NotFound desc = could not find container \"80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09\": container with ID starting with 80b202f2bddce5d5f2c8792adf473ab1d2aa3af55692a96a15abb8dc213fca09 not found: ID does not exist" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.426957 4780 scope.go:117] "RemoveContainer" containerID="801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3" Feb 19 09:41:03 crc kubenswrapper[4780]: E0219 09:41:03.427558 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3\": container with ID starting with 801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3 not found: ID does not exist" containerID="801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.427602 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3"} err="failed to get container status \"801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3\": rpc error: code = NotFound desc = could not find container \"801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3\": container with ID starting with 801f71c668846b2a99083482534a44bfa539c4f1f2de3c8d6c122df4da72fae3 not found: ID does not exist" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.427629 4780 scope.go:117] "RemoveContainer" containerID="43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478" Feb 19 09:41:03 crc kubenswrapper[4780]: E0219 
09:41:03.428047 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478\": container with ID starting with 43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478 not found: ID does not exist" containerID="43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.428116 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478"} err="failed to get container status \"43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478\": rpc error: code = NotFound desc = could not find container \"43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478\": container with ID starting with 43c9e9f06e6125b3efa419b5768bb64a741337d6d79383dc7bfa8a8d3abdf478 not found: ID does not exist" Feb 19 09:41:03 crc kubenswrapper[4780]: I0219 09:41:03.953731 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" path="/var/lib/kubelet/pods/512cfa75-112b-4906-a2cf-8ec7d7b1e0fc/volumes" Feb 19 09:41:04 crc kubenswrapper[4780]: I0219 09:41:04.826170 4780 scope.go:117] "RemoveContainer" containerID="86e5990bfc56ecb7d46e479729ec9707c5e698d2ab8dffba501a34d6f4c3b8d0" Feb 19 09:41:06 crc kubenswrapper[4780]: I0219 09:41:06.335972 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:41:06 crc kubenswrapper[4780]: I0219 09:41:06.336093 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:41:06 crc kubenswrapper[4780]: I0219 09:41:06.336217 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:41:06 crc kubenswrapper[4780]: I0219 09:41:06.337179 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00fe1350b9ecc9c8344991cd15f9b45eddd9cc38c31950ea39694cb017dfd3a2"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:41:06 crc kubenswrapper[4780]: I0219 09:41:06.337298 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://00fe1350b9ecc9c8344991cd15f9b45eddd9cc38c31950ea39694cb017dfd3a2" gracePeriod=600 Feb 19 09:41:07 crc kubenswrapper[4780]: I0219 09:41:07.391183 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="00fe1350b9ecc9c8344991cd15f9b45eddd9cc38c31950ea39694cb017dfd3a2" exitCode=0 Feb 19 09:41:07 crc kubenswrapper[4780]: I0219 09:41:07.391315 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"00fe1350b9ecc9c8344991cd15f9b45eddd9cc38c31950ea39694cb017dfd3a2"} Feb 19 09:41:07 crc kubenswrapper[4780]: I0219 09:41:07.391768 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"} Feb 19 09:41:07 crc kubenswrapper[4780]: I0219 09:41:07.391795 4780 scope.go:117] "RemoveContainer" containerID="046253bd42f8953e644c3e5fbc2bb3ff44f03d2e9d483c7af0714536062630ec" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.269196 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7hc2"] Feb 19 09:42:07 crc kubenswrapper[4780]: E0219 09:42:07.289656 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="extract-content" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.290513 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="extract-content" Feb 19 09:42:07 crc kubenswrapper[4780]: E0219 09:42:07.290536 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="registry-server" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.290543 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="registry-server" Feb 19 09:42:07 crc kubenswrapper[4780]: E0219 09:42:07.290557 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="extract-utilities" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.290564 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="extract-utilities" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.291020 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="512cfa75-112b-4906-a2cf-8ec7d7b1e0fc" containerName="registry-server" Feb 19 09:42:07 crc 
kubenswrapper[4780]: I0219 09:42:07.294346 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.298123 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7hc2"] Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.416830 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qjmk\" (UniqueName: \"kubernetes.io/projected/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-kube-api-access-9qjmk\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.417170 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-utilities\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.417201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-catalog-content\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.518658 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-utilities\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc 
kubenswrapper[4780]: I0219 09:42:07.518710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-catalog-content\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.518807 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qjmk\" (UniqueName: \"kubernetes.io/projected/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-kube-api-access-9qjmk\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.519287 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-catalog-content\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.519321 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-utilities\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.550044 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qjmk\" (UniqueName: \"kubernetes.io/projected/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-kube-api-access-9qjmk\") pod \"certified-operators-t7hc2\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 
09:42:07.619492 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:07 crc kubenswrapper[4780]: I0219 09:42:07.977046 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7hc2"] Feb 19 09:42:08 crc kubenswrapper[4780]: I0219 09:42:08.950220 4780 generic.go:334] "Generic (PLEG): container finished" podID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerID="22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52" exitCode=0 Feb 19 09:42:08 crc kubenswrapper[4780]: I0219 09:42:08.950311 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerDied","Data":"22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52"} Feb 19 09:42:08 crc kubenswrapper[4780]: I0219 09:42:08.950552 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerStarted","Data":"9c7d77201f2ca90449e1ccecd59c2135c141280de3e7074b5a554fc2927f94d4"} Feb 19 09:42:08 crc kubenswrapper[4780]: I0219 09:42:08.953719 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:42:09 crc kubenswrapper[4780]: I0219 09:42:09.968253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerStarted","Data":"bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1"} Feb 19 09:42:10 crc kubenswrapper[4780]: I0219 09:42:10.982855 4780 generic.go:334] "Generic (PLEG): container finished" podID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerID="bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1" exitCode=0 Feb 19 09:42:10 crc 
kubenswrapper[4780]: I0219 09:42:10.982959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerDied","Data":"bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1"} Feb 19 09:42:11 crc kubenswrapper[4780]: I0219 09:42:11.994977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerStarted","Data":"33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3"} Feb 19 09:42:12 crc kubenswrapper[4780]: I0219 09:42:12.021813 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7hc2" podStartSLOduration=2.588378341 podStartE2EDuration="5.021780614s" podCreationTimestamp="2026-02-19 09:42:07 +0000 UTC" firstStartedPulling="2026-02-19 09:42:08.953303541 +0000 UTC m=+4871.696960990" lastFinishedPulling="2026-02-19 09:42:11.386705774 +0000 UTC m=+4874.130363263" observedRunningTime="2026-02-19 09:42:12.018056333 +0000 UTC m=+4874.761713822" watchObservedRunningTime="2026-02-19 09:42:12.021780614 +0000 UTC m=+4874.765438113" Feb 19 09:42:17 crc kubenswrapper[4780]: I0219 09:42:17.619992 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:17 crc kubenswrapper[4780]: I0219 09:42:17.620726 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:17 crc kubenswrapper[4780]: I0219 09:42:17.665513 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:18 crc kubenswrapper[4780]: I0219 09:42:18.721609 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:18 crc kubenswrapper[4780]: I0219 09:42:18.800282 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7hc2"] Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.063459 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t7hc2" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="registry-server" containerID="cri-o://33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3" gracePeriod=2 Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.504207 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.530968 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-catalog-content\") pod \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.531103 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-utilities\") pod \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.531197 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qjmk\" (UniqueName: \"kubernetes.io/projected/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-kube-api-access-9qjmk\") pod \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\" (UID: \"482cd1b9-59a8-40a2-8a74-53e2fd7351d3\") " Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.533697 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-utilities" (OuterVolumeSpecName: "utilities") pod "482cd1b9-59a8-40a2-8a74-53e2fd7351d3" (UID: "482cd1b9-59a8-40a2-8a74-53e2fd7351d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.539769 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-kube-api-access-9qjmk" (OuterVolumeSpecName: "kube-api-access-9qjmk") pod "482cd1b9-59a8-40a2-8a74-53e2fd7351d3" (UID: "482cd1b9-59a8-40a2-8a74-53e2fd7351d3"). InnerVolumeSpecName "kube-api-access-9qjmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.632873 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:20 crc kubenswrapper[4780]: I0219 09:42:20.632926 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qjmk\" (UniqueName: \"kubernetes.io/projected/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-kube-api-access-9qjmk\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.077674 4780 generic.go:334] "Generic (PLEG): container finished" podID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerID="33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3" exitCode=0 Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.077755 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerDied","Data":"33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3"} Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.077806 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-t7hc2" event={"ID":"482cd1b9-59a8-40a2-8a74-53e2fd7351d3","Type":"ContainerDied","Data":"9c7d77201f2ca90449e1ccecd59c2135c141280de3e7074b5a554fc2927f94d4"} Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.077957 4780 scope.go:117] "RemoveContainer" containerID="33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.078242 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hc2" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.106156 4780 scope.go:117] "RemoveContainer" containerID="bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.135462 4780 scope.go:117] "RemoveContainer" containerID="22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.165916 4780 scope.go:117] "RemoveContainer" containerID="33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3" Feb 19 09:42:21 crc kubenswrapper[4780]: E0219 09:42:21.166828 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3\": container with ID starting with 33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3 not found: ID does not exist" containerID="33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.166909 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3"} err="failed to get container status \"33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3\": rpc error: code = NotFound desc = could not find container 
\"33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3\": container with ID starting with 33a737d2f811fcf1c60f5daeed2ddb19b57b74877478cb19d7cee64b2fb589c3 not found: ID does not exist" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.166950 4780 scope.go:117] "RemoveContainer" containerID="bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1" Feb 19 09:42:21 crc kubenswrapper[4780]: E0219 09:42:21.167674 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1\": container with ID starting with bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1 not found: ID does not exist" containerID="bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.167749 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1"} err="failed to get container status \"bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1\": rpc error: code = NotFound desc = could not find container \"bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1\": container with ID starting with bac82b7a1a39542e3b59b111ad21131919e898b61a98668329b5d7d894a3ddc1 not found: ID does not exist" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.167810 4780 scope.go:117] "RemoveContainer" containerID="22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52" Feb 19 09:42:21 crc kubenswrapper[4780]: E0219 09:42:21.168425 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52\": container with ID starting with 22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52 not found: ID does not exist" 
containerID="22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.168472 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52"} err="failed to get container status \"22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52\": rpc error: code = NotFound desc = could not find container \"22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52\": container with ID starting with 22690ee13c96ca4493bd6a38d1926130852f4480a772abd88936d5fa5f34fa52 not found: ID does not exist" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.243566 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "482cd1b9-59a8-40a2-8a74-53e2fd7351d3" (UID: "482cd1b9-59a8-40a2-8a74-53e2fd7351d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.247114 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482cd1b9-59a8-40a2-8a74-53e2fd7351d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.426322 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7hc2"] Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.431788 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t7hc2"] Feb 19 09:42:21 crc kubenswrapper[4780]: I0219 09:42:21.950037 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" path="/var/lib/kubelet/pods/482cd1b9-59a8-40a2-8a74-53e2fd7351d3/volumes" Feb 19 09:43:06 crc kubenswrapper[4780]: I0219 09:43:06.336606 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:43:06 crc kubenswrapper[4780]: I0219 09:43:06.338531 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:43:36 crc kubenswrapper[4780]: I0219 09:43:36.336180 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 09:43:36 crc kubenswrapper[4780]: I0219 09:43:36.338294 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.694950 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b9v58"] Feb 19 09:43:50 crc kubenswrapper[4780]: E0219 09:43:50.696345 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="extract-content" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.696379 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="extract-content" Feb 19 09:43:50 crc kubenswrapper[4780]: E0219 09:43:50.696406 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="registry-server" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.696426 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="registry-server" Feb 19 09:43:50 crc kubenswrapper[4780]: E0219 09:43:50.696479 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="extract-utilities" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.696497 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="extract-utilities" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.696863 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="482cd1b9-59a8-40a2-8a74-53e2fd7351d3" containerName="registry-server" Feb 19 09:43:50 
crc kubenswrapper[4780]: I0219 09:43:50.698112 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.715370 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9v58"] Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.808605 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-utilities\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.808701 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drskr\" (UniqueName: \"kubernetes.io/projected/75c4ac64-8285-4e6d-ae37-69400920f836-kube-api-access-drskr\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.808950 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-catalog-content\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.910263 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-catalog-content\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc 
kubenswrapper[4780]: I0219 09:43:50.911154 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-catalog-content\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.911377 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-utilities\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.911718 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-utilities\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.911808 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drskr\" (UniqueName: \"kubernetes.io/projected/75c4ac64-8285-4e6d-ae37-69400920f836-kube-api-access-drskr\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:50 crc kubenswrapper[4780]: I0219 09:43:50.939409 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drskr\" (UniqueName: \"kubernetes.io/projected/75c4ac64-8285-4e6d-ae37-69400920f836-kube-api-access-drskr\") pod \"redhat-operators-b9v58\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") " pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:51 crc kubenswrapper[4780]: I0219 09:43:51.029332 4780 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:43:51 crc kubenswrapper[4780]: I0219 09:43:51.258318 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9v58"] Feb 19 09:43:51 crc kubenswrapper[4780]: I0219 09:43:51.872234 4780 generic.go:334] "Generic (PLEG): container finished" podID="75c4ac64-8285-4e6d-ae37-69400920f836" containerID="04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c" exitCode=0 Feb 19 09:43:51 crc kubenswrapper[4780]: I0219 09:43:51.872317 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerDied","Data":"04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c"} Feb 19 09:43:51 crc kubenswrapper[4780]: I0219 09:43:51.872357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerStarted","Data":"0327212d7f2ccb7c827cdc750159c842d0d1e25dcfcfdb8a94cbb7b5f1c5bb4f"} Feb 19 09:43:53 crc kubenswrapper[4780]: I0219 09:43:53.893281 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerStarted","Data":"35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859"} Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.043474 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.044606 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.050070 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vvhh7" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.061346 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.168539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf15175e-7972-4ede-bffe-767e83fa5949\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.168622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzj27\" (UniqueName: \"kubernetes.io/projected/07504bcf-ae99-4ecb-ab72-58864a7b4830-kube-api-access-lzj27\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.270716 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf15175e-7972-4ede-bffe-767e83fa5949\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.270843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzj27\" (UniqueName: \"kubernetes.io/projected/07504bcf-ae99-4ecb-ab72-58864a7b4830-kube-api-access-lzj27\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " pod="openstack/mariadb-copy-data" 
Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.276229 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.276303 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf15175e-7972-4ede-bffe-767e83fa5949\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9a99c312ff9018f8ca1bc17cfa012e7d8a30acb5a2e650aa224cdc98d732a94b/globalmount\"" pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.304884 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzj27\" (UniqueName: \"kubernetes.io/projected/07504bcf-ae99-4ecb-ab72-58864a7b4830-kube-api-access-lzj27\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.325200 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf15175e-7972-4ede-bffe-767e83fa5949\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") pod \"mariadb-copy-data\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.382578 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.902667 4780 generic.go:334] "Generic (PLEG): container finished" podID="75c4ac64-8285-4e6d-ae37-69400920f836" containerID="35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859" exitCode=0 Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.902737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerDied","Data":"35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859"} Feb 19 09:43:54 crc kubenswrapper[4780]: W0219 09:43:54.942675 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07504bcf_ae99_4ecb_ab72_58864a7b4830.slice/crio-aa8174d48533ce8efcd074f11c460b92b6f1eb5735ee579d987e20a5acfc47d2 WatchSource:0}: Error finding container aa8174d48533ce8efcd074f11c460b92b6f1eb5735ee579d987e20a5acfc47d2: Status 404 returned error can't find the container with id aa8174d48533ce8efcd074f11c460b92b6f1eb5735ee579d987e20a5acfc47d2 Feb 19 09:43:54 crc kubenswrapper[4780]: I0219 09:43:54.944994 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 09:43:55 crc kubenswrapper[4780]: I0219 09:43:55.914430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerStarted","Data":"bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af"} Feb 19 09:43:55 crc kubenswrapper[4780]: I0219 09:43:55.915707 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"07504bcf-ae99-4ecb-ab72-58864a7b4830","Type":"ContainerStarted","Data":"e62e1ecd1b1c63142808065018537506c1fe3f300d14c99681ac61d09787506d"} Feb 19 09:43:55 crc kubenswrapper[4780]: I0219 
09:43:55.915730 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"07504bcf-ae99-4ecb-ab72-58864a7b4830","Type":"ContainerStarted","Data":"aa8174d48533ce8efcd074f11c460b92b6f1eb5735ee579d987e20a5acfc47d2"} Feb 19 09:43:55 crc kubenswrapper[4780]: I0219 09:43:55.957184 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b9v58" podStartSLOduration=2.463905965 podStartE2EDuration="5.957122655s" podCreationTimestamp="2026-02-19 09:43:50 +0000 UTC" firstStartedPulling="2026-02-19 09:43:51.874470548 +0000 UTC m=+4974.618127997" lastFinishedPulling="2026-02-19 09:43:55.367687218 +0000 UTC m=+4978.111344687" observedRunningTime="2026-02-19 09:43:55.944364624 +0000 UTC m=+4978.688022093" watchObservedRunningTime="2026-02-19 09:43:55.957122655 +0000 UTC m=+4978.700780134" Feb 19 09:43:55 crc kubenswrapper[4780]: I0219 09:43:55.973480 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.973454894 podStartE2EDuration="2.973454894s" podCreationTimestamp="2026-02-19 09:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:43:55.969924588 +0000 UTC m=+4978.713582077" watchObservedRunningTime="2026-02-19 09:43:55.973454894 +0000 UTC m=+4978.717112373" Feb 19 09:43:58 crc kubenswrapper[4780]: I0219 09:43:58.869317 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 09:43:58 crc kubenswrapper[4780]: I0219 09:43:58.872597 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:43:58 crc kubenswrapper[4780]: I0219 09:43:58.879724 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:43:58 crc kubenswrapper[4780]: I0219 09:43:58.958894 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56jp\" (UniqueName: \"kubernetes.io/projected/b74d706d-297e-4b7e-a64f-20dee29e43ce-kube-api-access-b56jp\") pod \"mariadb-client\" (UID: \"b74d706d-297e-4b7e-a64f-20dee29e43ce\") " pod="openstack/mariadb-client" Feb 19 09:43:59 crc kubenswrapper[4780]: I0219 09:43:59.061674 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56jp\" (UniqueName: \"kubernetes.io/projected/b74d706d-297e-4b7e-a64f-20dee29e43ce-kube-api-access-b56jp\") pod \"mariadb-client\" (UID: \"b74d706d-297e-4b7e-a64f-20dee29e43ce\") " pod="openstack/mariadb-client" Feb 19 09:43:59 crc kubenswrapper[4780]: I0219 09:43:59.103088 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56jp\" (UniqueName: \"kubernetes.io/projected/b74d706d-297e-4b7e-a64f-20dee29e43ce-kube-api-access-b56jp\") pod \"mariadb-client\" (UID: \"b74d706d-297e-4b7e-a64f-20dee29e43ce\") " pod="openstack/mariadb-client" Feb 19 09:43:59 crc kubenswrapper[4780]: I0219 09:43:59.194026 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:43:59 crc kubenswrapper[4780]: I0219 09:43:59.716088 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:43:59 crc kubenswrapper[4780]: W0219 09:43:59.727434 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb74d706d_297e_4b7e_a64f_20dee29e43ce.slice/crio-90a2166e11e2d2df49aa0a342897396db4a58dfe18d253a94b52a147558d1daf WatchSource:0}: Error finding container 90a2166e11e2d2df49aa0a342897396db4a58dfe18d253a94b52a147558d1daf: Status 404 returned error can't find the container with id 90a2166e11e2d2df49aa0a342897396db4a58dfe18d253a94b52a147558d1daf Feb 19 09:43:59 crc kubenswrapper[4780]: I0219 09:43:59.955080 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b74d706d-297e-4b7e-a64f-20dee29e43ce","Type":"ContainerStarted","Data":"90a2166e11e2d2df49aa0a342897396db4a58dfe18d253a94b52a147558d1daf"} Feb 19 09:44:00 crc kubenswrapper[4780]: I0219 09:44:00.971280 4780 generic.go:334] "Generic (PLEG): container finished" podID="b74d706d-297e-4b7e-a64f-20dee29e43ce" containerID="68074719f44ced2f337ff2e6d04540359a34b95c5948dfdb11717b01bc28de6e" exitCode=0 Feb 19 09:44:00 crc kubenswrapper[4780]: I0219 09:44:00.971393 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b74d706d-297e-4b7e-a64f-20dee29e43ce","Type":"ContainerDied","Data":"68074719f44ced2f337ff2e6d04540359a34b95c5948dfdb11717b01bc28de6e"} Feb 19 09:44:01 crc kubenswrapper[4780]: I0219 09:44:01.030236 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:44:01 crc kubenswrapper[4780]: I0219 09:44:01.030373 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:44:02 crc 
kubenswrapper[4780]: I0219 09:44:02.273694 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.299227 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_b74d706d-297e-4b7e-a64f-20dee29e43ce/mariadb-client/0.log" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.345383 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.352932 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.411074 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9v58" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="registry-server" probeResult="failure" output=< Feb 19 09:44:02 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 09:44:02 crc kubenswrapper[4780]: > Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.417217 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56jp\" (UniqueName: \"kubernetes.io/projected/b74d706d-297e-4b7e-a64f-20dee29e43ce-kube-api-access-b56jp\") pod \"b74d706d-297e-4b7e-a64f-20dee29e43ce\" (UID: \"b74d706d-297e-4b7e-a64f-20dee29e43ce\") " Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.431745 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74d706d-297e-4b7e-a64f-20dee29e43ce-kube-api-access-b56jp" (OuterVolumeSpecName: "kube-api-access-b56jp") pod "b74d706d-297e-4b7e-a64f-20dee29e43ce" (UID: "b74d706d-297e-4b7e-a64f-20dee29e43ce"). InnerVolumeSpecName "kube-api-access-b56jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.519844 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56jp\" (UniqueName: \"kubernetes.io/projected/b74d706d-297e-4b7e-a64f-20dee29e43ce-kube-api-access-b56jp\") on node \"crc\" DevicePath \"\"" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.554464 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:02 crc kubenswrapper[4780]: E0219 09:44:02.554965 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74d706d-297e-4b7e-a64f-20dee29e43ce" containerName="mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.554988 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74d706d-297e-4b7e-a64f-20dee29e43ce" containerName="mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.555348 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74d706d-297e-4b7e-a64f-20dee29e43ce" containerName="mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.556233 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.563042 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.723173 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxt74\" (UniqueName: \"kubernetes.io/projected/161f87e9-9bb6-4cf4-85ad-44d24ce37586-kube-api-access-hxt74\") pod \"mariadb-client\" (UID: \"161f87e9-9bb6-4cf4-85ad-44d24ce37586\") " pod="openstack/mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.825150 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxt74\" (UniqueName: \"kubernetes.io/projected/161f87e9-9bb6-4cf4-85ad-44d24ce37586-kube-api-access-hxt74\") pod \"mariadb-client\" (UID: \"161f87e9-9bb6-4cf4-85ad-44d24ce37586\") " pod="openstack/mariadb-client" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.992666 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a2166e11e2d2df49aa0a342897396db4a58dfe18d253a94b52a147558d1daf" Feb 19 09:44:02 crc kubenswrapper[4780]: I0219 09:44:02.992745 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:44:03 crc kubenswrapper[4780]: I0219 09:44:03.019246 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="b74d706d-297e-4b7e-a64f-20dee29e43ce" podUID="161f87e9-9bb6-4cf4-85ad-44d24ce37586" Feb 19 09:44:03 crc kubenswrapper[4780]: I0219 09:44:03.070219 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxt74\" (UniqueName: \"kubernetes.io/projected/161f87e9-9bb6-4cf4-85ad-44d24ce37586-kube-api-access-hxt74\") pod \"mariadb-client\" (UID: \"161f87e9-9bb6-4cf4-85ad-44d24ce37586\") " pod="openstack/mariadb-client" Feb 19 09:44:03 crc kubenswrapper[4780]: I0219 09:44:03.175838 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:44:03 crc kubenswrapper[4780]: I0219 09:44:03.743396 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:03 crc kubenswrapper[4780]: W0219 09:44:03.749457 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161f87e9_9bb6_4cf4_85ad_44d24ce37586.slice/crio-f988cb983033027eb2bf356a96fb1b3fb5ecd122cfdd1b5bfd04590d8f87b24d WatchSource:0}: Error finding container f988cb983033027eb2bf356a96fb1b3fb5ecd122cfdd1b5bfd04590d8f87b24d: Status 404 returned error can't find the container with id f988cb983033027eb2bf356a96fb1b3fb5ecd122cfdd1b5bfd04590d8f87b24d Feb 19 09:44:03 crc kubenswrapper[4780]: I0219 09:44:03.947215 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74d706d-297e-4b7e-a64f-20dee29e43ce" path="/var/lib/kubelet/pods/b74d706d-297e-4b7e-a64f-20dee29e43ce/volumes" Feb 19 09:44:04 crc kubenswrapper[4780]: I0219 09:44:04.002744 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"161f87e9-9bb6-4cf4-85ad-44d24ce37586","Type":"ContainerStarted","Data":"a85a65c10871124fdb8b32374f1a828a0fa86496f3056a71ed0dfc01bd88c037"} Feb 19 09:44:04 crc kubenswrapper[4780]: I0219 09:44:04.002791 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"161f87e9-9bb6-4cf4-85ad-44d24ce37586","Type":"ContainerStarted","Data":"f988cb983033027eb2bf356a96fb1b3fb5ecd122cfdd1b5bfd04590d8f87b24d"} Feb 19 09:44:04 crc kubenswrapper[4780]: I0219 09:44:04.025044 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.025021983 podStartE2EDuration="2.025021983s" podCreationTimestamp="2026-02-19 09:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:04.023327432 +0000 UTC m=+4986.766984911" watchObservedRunningTime="2026-02-19 09:44:04.025021983 +0000 UTC m=+4986.768679452" Feb 19 09:44:04 crc kubenswrapper[4780]: I0219 09:44:04.593275 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_161f87e9-9bb6-4cf4-85ad-44d24ce37586/mariadb-client/0.log" Feb 19 09:44:04 crc kubenswrapper[4780]: I0219 09:44:04.964010 4780 scope.go:117] "RemoveContainer" containerID="4b11a85e2080d14ab45111bfc7fc9f408495ed569e77bd423273fe03fe53d8ce" Feb 19 09:44:05 crc kubenswrapper[4780]: I0219 09:44:05.015803 4780 generic.go:334] "Generic (PLEG): container finished" podID="161f87e9-9bb6-4cf4-85ad-44d24ce37586" containerID="a85a65c10871124fdb8b32374f1a828a0fa86496f3056a71ed0dfc01bd88c037" exitCode=0 Feb 19 09:44:05 crc kubenswrapper[4780]: I0219 09:44:05.015851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"161f87e9-9bb6-4cf4-85ad-44d24ce37586","Type":"ContainerDied","Data":"a85a65c10871124fdb8b32374f1a828a0fa86496f3056a71ed0dfc01bd88c037"} Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 
09:44:06.313651 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.336240 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.336303 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.336351 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.337083 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.337225 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" gracePeriod=600 Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.353966 4780 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.360529 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 09:44:06 crc kubenswrapper[4780]: E0219 09:44:06.482756 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.494783 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxt74\" (UniqueName: \"kubernetes.io/projected/161f87e9-9bb6-4cf4-85ad-44d24ce37586-kube-api-access-hxt74\") pod \"161f87e9-9bb6-4cf4-85ad-44d24ce37586\" (UID: \"161f87e9-9bb6-4cf4-85ad-44d24ce37586\") " Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.503085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161f87e9-9bb6-4cf4-85ad-44d24ce37586-kube-api-access-hxt74" (OuterVolumeSpecName: "kube-api-access-hxt74") pod "161f87e9-9bb6-4cf4-85ad-44d24ce37586" (UID: "161f87e9-9bb6-4cf4-85ad-44d24ce37586"). InnerVolumeSpecName "kube-api-access-hxt74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:44:06 crc kubenswrapper[4780]: I0219 09:44:06.597274 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxt74\" (UniqueName: \"kubernetes.io/projected/161f87e9-9bb6-4cf4-85ad-44d24ce37586-kube-api-access-hxt74\") on node \"crc\" DevicePath \"\"" Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.045177 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" exitCode=0 Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.045307 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"} Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.045357 4780 scope.go:117] "RemoveContainer" containerID="00fe1350b9ecc9c8344991cd15f9b45eddd9cc38c31950ea39694cb017dfd3a2" Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.046619 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:44:07 crc kubenswrapper[4780]: E0219 09:44:07.047321 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.049096 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f988cb983033027eb2bf356a96fb1b3fb5ecd122cfdd1b5bfd04590d8f87b24d" 
Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.049220 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 09:44:07 crc kubenswrapper[4780]: I0219 09:44:07.955522 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161f87e9-9bb6-4cf4-85ad-44d24ce37586" path="/var/lib/kubelet/pods/161f87e9-9bb6-4cf4-85ad-44d24ce37586/volumes" Feb 19 09:44:11 crc kubenswrapper[4780]: I0219 09:44:11.105775 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:44:11 crc kubenswrapper[4780]: I0219 09:44:11.181465 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b9v58" Feb 19 09:44:11 crc kubenswrapper[4780]: I0219 09:44:11.358004 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9v58"] Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.103185 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b9v58" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="registry-server" containerID="cri-o://bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af" gracePeriod=2 Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.529904 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9v58"
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.610484 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-catalog-content\") pod \"75c4ac64-8285-4e6d-ae37-69400920f836\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") "
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.610675 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-utilities\") pod \"75c4ac64-8285-4e6d-ae37-69400920f836\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") "
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.611976 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-utilities" (OuterVolumeSpecName: "utilities") pod "75c4ac64-8285-4e6d-ae37-69400920f836" (UID: "75c4ac64-8285-4e6d-ae37-69400920f836"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.612180 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drskr\" (UniqueName: \"kubernetes.io/projected/75c4ac64-8285-4e6d-ae37-69400920f836-kube-api-access-drskr\") pod \"75c4ac64-8285-4e6d-ae37-69400920f836\" (UID: \"75c4ac64-8285-4e6d-ae37-69400920f836\") "
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.613704 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.618491 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c4ac64-8285-4e6d-ae37-69400920f836-kube-api-access-drskr" (OuterVolumeSpecName: "kube-api-access-drskr") pod "75c4ac64-8285-4e6d-ae37-69400920f836" (UID: "75c4ac64-8285-4e6d-ae37-69400920f836"). InnerVolumeSpecName "kube-api-access-drskr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.715449 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drskr\" (UniqueName: \"kubernetes.io/projected/75c4ac64-8285-4e6d-ae37-69400920f836-kube-api-access-drskr\") on node \"crc\" DevicePath \"\""
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.744199 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75c4ac64-8285-4e6d-ae37-69400920f836" (UID: "75c4ac64-8285-4e6d-ae37-69400920f836"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:44:13 crc kubenswrapper[4780]: I0219 09:44:13.816883 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c4ac64-8285-4e6d-ae37-69400920f836-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.114718 4780 generic.go:334] "Generic (PLEG): container finished" podID="75c4ac64-8285-4e6d-ae37-69400920f836" containerID="bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af" exitCode=0
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.114788 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerDied","Data":"bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af"}
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.114835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9v58" event={"ID":"75c4ac64-8285-4e6d-ae37-69400920f836","Type":"ContainerDied","Data":"0327212d7f2ccb7c827cdc750159c842d0d1e25dcfcfdb8a94cbb7b5f1c5bb4f"}
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.114837 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9v58"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.114864 4780 scope.go:117] "RemoveContainer" containerID="bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.144601 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9v58"]
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.151464 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b9v58"]
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.155435 4780 scope.go:117] "RemoveContainer" containerID="35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.195434 4780 scope.go:117] "RemoveContainer" containerID="04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.239433 4780 scope.go:117] "RemoveContainer" containerID="bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af"
Feb 19 09:44:14 crc kubenswrapper[4780]: E0219 09:44:14.240092 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af\": container with ID starting with bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af not found: ID does not exist" containerID="bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.240201 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af"} err="failed to get container status \"bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af\": rpc error: code = NotFound desc = could not find container \"bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af\": container with ID starting with bed25de7282f616e879c55b5d43564e01cedf892376d7c0b6787aaf377fbb0af not found: ID does not exist"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.240231 4780 scope.go:117] "RemoveContainer" containerID="35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859"
Feb 19 09:44:14 crc kubenswrapper[4780]: E0219 09:44:14.240581 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859\": container with ID starting with 35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859 not found: ID does not exist" containerID="35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.240629 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859"} err="failed to get container status \"35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859\": rpc error: code = NotFound desc = could not find container \"35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859\": container with ID starting with 35d28cee2f28129770ee576f6c661579f193a828a7868381de3bcad23ce50859 not found: ID does not exist"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.240657 4780 scope.go:117] "RemoveContainer" containerID="04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c"
Feb 19 09:44:14 crc kubenswrapper[4780]: E0219 09:44:14.241891 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c\": container with ID starting with 04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c not found: ID does not exist" containerID="04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c"
Feb 19 09:44:14 crc kubenswrapper[4780]: I0219 09:44:14.242071 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c"} err="failed to get container status \"04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c\": rpc error: code = NotFound desc = could not find container \"04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c\": container with ID starting with 04dcd30ef5cae2d938bba1cd1f0b7e640ab90c4d724859760c7c7d551e41288c not found: ID does not exist"
Feb 19 09:44:15 crc kubenswrapper[4780]: I0219 09:44:15.949499 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" path="/var/lib/kubelet/pods/75c4ac64-8285-4e6d-ae37-69400920f836/volumes"
Feb 19 09:44:21 crc kubenswrapper[4780]: I0219 09:44:21.938687 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:44:21 crc kubenswrapper[4780]: E0219 09:44:21.939463 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:44:33 crc kubenswrapper[4780]: I0219 09:44:33.938660 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:44:33 crc kubenswrapper[4780]: E0219 09:44:33.939671 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:44:44 crc kubenswrapper[4780]: I0219 09:44:44.937823 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:44:44 crc kubenswrapper[4780]: E0219 09:44:44.938568 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.287239 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 09:44:46 crc kubenswrapper[4780]: E0219 09:44:46.287898 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161f87e9-9bb6-4cf4-85ad-44d24ce37586" containerName="mariadb-client"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.287912 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="161f87e9-9bb6-4cf4-85ad-44d24ce37586" containerName="mariadb-client"
Feb 19 09:44:46 crc kubenswrapper[4780]: E0219 09:44:46.287944 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="extract-content"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.287952 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="extract-content"
Feb 19 09:44:46 crc kubenswrapper[4780]: E0219 09:44:46.287974 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="registry-server"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.287980 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="registry-server"
Feb 19 09:44:46 crc kubenswrapper[4780]: E0219 09:44:46.288020 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="extract-utilities"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.288027 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="extract-utilities"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.288233 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="161f87e9-9bb6-4cf4-85ad-44d24ce37586" containerName="mariadb-client"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.288245 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c4ac64-8285-4e6d-ae37-69400920f836" containerName="registry-server"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.289187 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.292112 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.293712 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ff5dh"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.294290 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.309679 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.317344 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.319463 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.340054 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.341387 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366169 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f9285d-acc2-417e-b046-c6991dd305c8-config\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366236 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68f9285d-acc2-417e-b046-c6991dd305c8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366262 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366288 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366324 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366349 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/299d8395-d188-40f6-8527-b6cfc8084475-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366373 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366392 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbtl\" (UniqueName: \"kubernetes.io/projected/68f9285d-acc2-417e-b046-c6991dd305c8-kube-api-access-znbtl\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366411 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299d8395-d188-40f6-8527-b6cfc8084475-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366432 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-config\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366449 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmc2k\" (UniqueName: \"kubernetes.io/projected/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-kube-api-access-zmc2k\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366468 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68f9285d-acc2-417e-b046-c6991dd305c8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366494 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d56p\" (UniqueName: \"kubernetes.io/projected/299d8395-d188-40f6-8527-b6cfc8084475-kube-api-access-6d56p\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366532 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/299d8395-d188-40f6-8527-b6cfc8084475-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/299d8395-d188-40f6-8527-b6cfc8084475-config\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366596 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.366613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f9285d-acc2-417e-b046-c6991dd305c8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.378883 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.390190 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468371 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468452 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468516 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/299d8395-d188-40f6-8527-b6cfc8084475-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468552 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468580 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znbtl\" (UniqueName: \"kubernetes.io/projected/68f9285d-acc2-417e-b046-c6991dd305c8-kube-api-access-znbtl\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468606 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299d8395-d188-40f6-8527-b6cfc8084475-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468636 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-config\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmc2k\" (UniqueName: \"kubernetes.io/projected/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-kube-api-access-zmc2k\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468682 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68f9285d-acc2-417e-b046-c6991dd305c8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468715 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d56p\" (UniqueName: \"kubernetes.io/projected/299d8395-d188-40f6-8527-b6cfc8084475-kube-api-access-6d56p\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/299d8395-d188-40f6-8527-b6cfc8084475-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468786 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468822 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/299d8395-d188-40f6-8527-b6cfc8084475-config\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468846 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468870 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f9285d-acc2-417e-b046-c6991dd305c8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468900 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f9285d-acc2-417e-b046-c6991dd305c8-config\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468935 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68f9285d-acc2-417e-b046-c6991dd305c8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.468964 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.469723 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/299d8395-d188-40f6-8527-b6cfc8084475-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.469910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.470294 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68f9285d-acc2-417e-b046-c6991dd305c8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.470465 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/299d8395-d188-40f6-8527-b6cfc8084475-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.470561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-config\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.470762 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f9285d-acc2-417e-b046-c6991dd305c8-config\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.471028 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.471156 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68f9285d-acc2-417e-b046-c6991dd305c8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.471722 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/299d8395-d188-40f6-8527-b6cfc8084475-config\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.472789 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.472819 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0776040efe18f4036014781dda2722e5eb5a96e8c8bfca702aaa455f25838593/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.473363 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.473427 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8994d52cddd3e63b1415a2b6e0626767216b0533ed6daf55df117a39a7ec4f63/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.476507 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f9285d-acc2-417e-b046-c6991dd305c8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.485543 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.485584 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a5a31fb92adcd882cf0e5af8608453598e9ee6ca5ffa965fc7dc91c82ab367ff/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.489477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299d8395-d188-40f6-8527-b6cfc8084475-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.489772 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.492283 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znbtl\" (UniqueName: \"kubernetes.io/projected/68f9285d-acc2-417e-b046-c6991dd305c8-kube-api-access-znbtl\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.493244 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d56p\" (UniqueName: \"kubernetes.io/projected/299d8395-d188-40f6-8527-b6cfc8084475-kube-api-access-6d56p\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.499617 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.500864 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.502183 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-h7rzf"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.502223 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmc2k\" (UniqueName: \"kubernetes.io/projected/e1b5ec7b-ac41-4149-93ae-7240d7bf6008-kube-api-access-zmc2k\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.506549 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.515223 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.519334 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.523036 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.546672 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.565045 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.566478 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.572542 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.589998 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-594e1f32-4ab5-4106-8fc5-7038202e78da\") pod \"ovsdbserver-nb-1\" (UID: \"68f9285d-acc2-417e-b046-c6991dd305c8\") " pod="openstack/ovsdbserver-nb-1"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.590440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35e724d8-76ee-4867-9535-8b5dd49e1b4f\") pod \"ovsdbserver-nb-2\" (UID: \"e1b5ec7b-ac41-4149-93ae-7240d7bf6008\") " pod="openstack/ovsdbserver-nb-2"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.596937 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.598377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9edc0cc-9639-4e79-9998-3c95d1bb9380\") pod \"ovsdbserver-nb-0\" (UID: \"299d8395-d188-40f6-8527-b6cfc8084475\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.615742 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.640893 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.655048 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.672514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-config\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400a0a9a-193f-4191-aff8-2549e9f04533-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673145 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcbf7\" (UniqueName: \"kubernetes.io/projected/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-kube-api-access-xcbf7\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673175 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25c6eb26-4b88-4ced-a69d-0527da97ed8e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673196 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfdp\" (UniqueName: \"kubernetes.io/projected/400a0a9a-193f-4191-aff8-2549e9f04533-kube-api-access-6kfdp\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673234 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/400a0a9a-193f-4191-aff8-2549e9f04533-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673262 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c6eb26-4b88-4ced-a69d-0527da97ed8e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25c6eb26-4b88-4ced-a69d-0527da97ed8e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673331 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a0a9a-193f-4191-aff8-2549e9f04533-config\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673352 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673413 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673437 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673473 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/400a0a9a-193f-4191-aff8-2549e9f04533-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673493 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673517 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hn7s\" (UniqueName: \"kubernetes.io/projected/25c6eb26-4b88-4ced-a69d-0527da97ed8e-kube-api-access-7hn7s\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c6eb26-4b88-4ced-a69d-0527da97ed8e-config\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.673580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775619 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-config\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775739 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775768 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400a0a9a-193f-4191-aff8-2549e9f04533-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775793 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcbf7\" (UniqueName: \"kubernetes.io/projected/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-kube-api-access-xcbf7\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775841 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25c6eb26-4b88-4ced-a69d-0527da97ed8e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfdp\" (UniqueName: \"kubernetes.io/projected/400a0a9a-193f-4191-aff8-2549e9f04533-kube-api-access-6kfdp\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/400a0a9a-193f-4191-aff8-2549e9f04533-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " 
pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775941 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c6eb26-4b88-4ced-a69d-0527da97ed8e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.775979 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25c6eb26-4b88-4ced-a69d-0527da97ed8e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776009 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a0a9a-193f-4191-aff8-2549e9f04533-config\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776029 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776083 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776104 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/400a0a9a-193f-4191-aff8-2549e9f04533-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776164 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776187 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hn7s\" (UniqueName: \"kubernetes.io/projected/25c6eb26-4b88-4ced-a69d-0527da97ed8e-kube-api-access-7hn7s\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776223 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c6eb26-4b88-4ced-a69d-0527da97ed8e-config\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776243 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.776496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-config\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.777418 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.777497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a0a9a-193f-4191-aff8-2549e9f04533-config\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.777622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.777970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/400a0a9a-193f-4191-aff8-2549e9f04533-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.778093 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/400a0a9a-193f-4191-aff8-2549e9f04533-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.778277 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25c6eb26-4b88-4ced-a69d-0527da97ed8e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.778623 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c6eb26-4b88-4ced-a69d-0527da97ed8e-config\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.779362 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25c6eb26-4b88-4ced-a69d-0527da97ed8e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.781083 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.781120 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75e358ae5150e5e9ece54e8c167cc5fe6f1d3c9c90baab10a81898d7a67a871d/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.781558 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400a0a9a-193f-4191-aff8-2549e9f04533-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.781807 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.781963 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.782035 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be4366c51423745d694f521c666e3ef5b3a3b4b89d5279c97fca627bcb160a01/globalmount\"" pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.782829 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c6eb26-4b88-4ced-a69d-0527da97ed8e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.788636 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.788663 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5fb6b8593e83b1cc23f683df65a9b615505eada56656ffde5fdd4534eba0503/globalmount\"" pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.795159 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hn7s\" (UniqueName: \"kubernetes.io/projected/25c6eb26-4b88-4ced-a69d-0527da97ed8e-kube-api-access-7hn7s\") pod \"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.797102 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcbf7\" (UniqueName: \"kubernetes.io/projected/fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b-kube-api-access-xcbf7\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.798767 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfdp\" (UniqueName: \"kubernetes.io/projected/400a0a9a-193f-4191-aff8-2549e9f04533-kube-api-access-6kfdp\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.817825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9ded95-2b9c-4712-8fad-2a06c3df138c\") pod 
\"ovsdbserver-sb-0\" (UID: \"25c6eb26-4b88-4ced-a69d-0527da97ed8e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.822344 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa7e4a2a-7834-4f09-86ab-638e9fa00ed1\") pod \"ovsdbserver-sb-2\" (UID: \"400a0a9a-193f-4191-aff8-2549e9f04533\") " pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.825447 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72bc80ee-c37b-4076-90e0-c1247ce34662\") pod \"ovsdbserver-sb-1\" (UID: \"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b\") " pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.908236 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.924402 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:46 crc kubenswrapper[4780]: I0219 09:44:46.916690 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.175404 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.277772 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.373261 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.399204 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"e1b5ec7b-ac41-4149-93ae-7240d7bf6008","Type":"ContainerStarted","Data":"a626a1239f0372bf2a5c1e8241a7f30762621f8b6b4f91a75889c7b7545953c5"} Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.401280 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"299d8395-d188-40f6-8527-b6cfc8084475","Type":"ContainerStarted","Data":"5fa9035859d253ad34acc396f8331e08f308b666ce8d757f84ddb907d6681878"} Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.482827 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 09:44:47 crc kubenswrapper[4780]: I0219 09:44:47.576079 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.410701 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"68f9285d-acc2-417e-b046-c6991dd305c8","Type":"ContainerStarted","Data":"79fc1961fad0d5408165999d493e68f90710dd0bdcfec39f63470ddfaad1e329"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.412059 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" 
event={"ID":"68f9285d-acc2-417e-b046-c6991dd305c8","Type":"ContainerStarted","Data":"d5c94184855bc2c4533ffd8bf1f4ef3df1124658c1d2162ee5cc7c4111d5ad00"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.412201 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"68f9285d-acc2-417e-b046-c6991dd305c8","Type":"ContainerStarted","Data":"1d284540a8c9a14140adf7bfd5cb9416f11226949f31d7ec7983a5274da8e52a"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.412850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"400a0a9a-193f-4191-aff8-2549e9f04533","Type":"ContainerStarted","Data":"602caf77a0db7c2f6ba3633c7d555472b2f6a1191dea153753c4ded1fc44b743"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.412892 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"400a0a9a-193f-4191-aff8-2549e9f04533","Type":"ContainerStarted","Data":"62b4e72d6e09b2fe7a1f3e189cc322bc6670fdc398d892a87a57d40750909f59"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.412904 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"400a0a9a-193f-4191-aff8-2549e9f04533","Type":"ContainerStarted","Data":"5b7d4ccb4c64857fa491ea39fbaf4670659530a85747a5ba2329651e1b4e1186"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.415077 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"299d8395-d188-40f6-8527-b6cfc8084475","Type":"ContainerStarted","Data":"ec0e2bfc164ca7fe7cd2ba766eebd51965f4c5a8f9ca90042dbc41b0cb5e63e3"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.415204 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"299d8395-d188-40f6-8527-b6cfc8084475","Type":"ContainerStarted","Data":"b483556dd9736a8b27fe3345ab7abc7d2028c1e284001f5458840d696ca94a08"} Feb 19 09:44:48 crc 
kubenswrapper[4780]: I0219 09:44:48.416976 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25c6eb26-4b88-4ced-a69d-0527da97ed8e","Type":"ContainerStarted","Data":"00edc42bbbcc64d7bf694b0825fe6470611eb73e50cdd41381841ff1d7379904"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.417013 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25c6eb26-4b88-4ced-a69d-0527da97ed8e","Type":"ContainerStarted","Data":"8543ec2927266a73a92014f034082ff52e9b2964ae18960ae9f719e0abb18050"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.417029 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25c6eb26-4b88-4ced-a69d-0527da97ed8e","Type":"ContainerStarted","Data":"882284bad2a547dfba9623896582d9b0044a8f26348d299b8a568009c53ac80e"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.419341 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"e1b5ec7b-ac41-4149-93ae-7240d7bf6008","Type":"ContainerStarted","Data":"4601e5ef42d54952ae0aacfa293409a5fb60bf4c81dcc6533ff74e12dca0ce42"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.419390 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"e1b5ec7b-ac41-4149-93ae-7240d7bf6008","Type":"ContainerStarted","Data":"e56239a0da8b83fac863ad8b3d092d316ea11adfd733289df0d7cd33d433bc27"} Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.441656 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.441628932 podStartE2EDuration="3.441628932s" podCreationTimestamp="2026-02-19 09:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:48.428972663 +0000 UTC m=+5031.172630102" 
watchObservedRunningTime="2026-02-19 09:44:48.441628932 +0000 UTC m=+5031.185286391" Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.454014 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.4539978639999998 podStartE2EDuration="3.453997864s" podCreationTimestamp="2026-02-19 09:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:48.447596618 +0000 UTC m=+5031.191254067" watchObservedRunningTime="2026-02-19 09:44:48.453997864 +0000 UTC m=+5031.197655303" Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.466267 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.466250793 podStartE2EDuration="3.466250793s" podCreationTimestamp="2026-02-19 09:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:48.465431133 +0000 UTC m=+5031.209088602" watchObservedRunningTime="2026-02-19 09:44:48.466250793 +0000 UTC m=+5031.209908242" Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.489514 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.489493071 podStartE2EDuration="3.489493071s" podCreationTimestamp="2026-02-19 09:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:48.485979955 +0000 UTC m=+5031.229637414" watchObservedRunningTime="2026-02-19 09:44:48.489493071 +0000 UTC m=+5031.233150520" Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.517466 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.517448013 
podStartE2EDuration="3.517448013s" podCreationTimestamp="2026-02-19 09:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:48.506803603 +0000 UTC m=+5031.250461062" watchObservedRunningTime="2026-02-19 09:44:48.517448013 +0000 UTC m=+5031.261105472" Feb 19 09:44:48 crc kubenswrapper[4780]: I0219 09:44:48.695510 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.429845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b","Type":"ContainerStarted","Data":"e38bd47dfc3ebca95c4fc7656b3f059200418ef97efdadf119d802bbca372955"} Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.430271 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b","Type":"ContainerStarted","Data":"62ad6e90dea463add4494c729b8ae7f719f68985b656f1801101c1dbb46d50e6"} Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.430286 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b","Type":"ContainerStarted","Data":"de71de74634e601049b9c0f04ce958cbe1d8a8b13e71b4951a1f6e14c03bbb55"} Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.451783 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.451756186 podStartE2EDuration="4.451756186s" podCreationTimestamp="2026-02-19 09:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:49.451322136 +0000 UTC m=+5032.194979615" watchObservedRunningTime="2026-02-19 09:44:49.451756186 +0000 UTC m=+5032.195413675" Feb 19 09:44:49 crc 
kubenswrapper[4780]: I0219 09:44:49.617218 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.641459 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.655984 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.909234 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.917519 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:49 crc kubenswrapper[4780]: I0219 09:44:49.924720 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:51 crc kubenswrapper[4780]: I0219 09:44:51.616678 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 09:44:51 crc kubenswrapper[4780]: I0219 09:44:51.641251 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Feb 19 09:44:51 crc kubenswrapper[4780]: I0219 09:44:51.655892 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Feb 19 09:44:51 crc kubenswrapper[4780]: I0219 09:44:51.908768 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:51 crc kubenswrapper[4780]: I0219 09:44:51.918028 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:51 crc kubenswrapper[4780]: I0219 09:44:51.924524 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:52 crc kubenswrapper[4780]: I0219 09:44:52.662754 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 09:44:52 crc kubenswrapper[4780]: I0219 09:44:52.717447 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Feb 19 09:44:52 crc kubenswrapper[4780]: I0219 09:44:52.730509 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Feb 19 09:44:52 crc kubenswrapper[4780]: I0219 09:44:52.753434 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 09:44:52 crc kubenswrapper[4780]: I0219 09:44:52.808446 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Feb 19 09:44:52 crc kubenswrapper[4780]: I0219 09:44:52.991430 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.017365 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.037205 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.057669 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.070137 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-hktm7"] Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.071482 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.073200 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.076175 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-hktm7"] Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.107627 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.192395 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-config\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.192640 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-ovsdbserver-nb\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.192881 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-dns-svc\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.193003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wd5\" (UniqueName: 
\"kubernetes.io/projected/0165cfba-3421-42e6-a9bd-c5ab0793035d-kube-api-access-47wd5\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.294447 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-ovsdbserver-nb\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.294838 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-dns-svc\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.294895 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wd5\" (UniqueName: \"kubernetes.io/projected/0165cfba-3421-42e6-a9bd-c5ab0793035d-kube-api-access-47wd5\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.294932 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-config\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.295351 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-ovsdbserver-nb\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.295801 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-dns-svc\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.295949 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-config\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.322843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wd5\" (UniqueName: \"kubernetes.io/projected/0165cfba-3421-42e6-a9bd-c5ab0793035d-kube-api-access-47wd5\") pod \"dnsmasq-dns-697b8d7675-hktm7\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.390051 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.413059 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-hktm7"] Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.442730 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55cb6fc89-fbcvt"] Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.448768 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.455335 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.463694 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55cb6fc89-fbcvt"] Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.501500 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb9l\" (UniqueName: \"kubernetes.io/projected/8885b48d-867b-4e74-820b-d0fd765b4006-kube-api-access-9hb9l\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.501567 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-config\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.501603 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-nb\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.501683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-sb\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " 
pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.501728 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-dns-svc\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.520495 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.525818 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.602989 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb9l\" (UniqueName: \"kubernetes.io/projected/8885b48d-867b-4e74-820b-d0fd765b4006-kube-api-access-9hb9l\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.603030 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-config\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.603053 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-nb\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.603149 
4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-sb\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.603226 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-dns-svc\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.604872 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-config\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.605759 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-sb\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.606313 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-dns-svc\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.606710 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-nb\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.624568 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb9l\" (UniqueName: \"kubernetes.io/projected/8885b48d-867b-4e74-820b-d0fd765b4006-kube-api-access-9hb9l\") pod \"dnsmasq-dns-55cb6fc89-fbcvt\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") " pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.800664 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:53 crc kubenswrapper[4780]: I0219 09:44:53.900012 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-hktm7"] Feb 19 09:44:53 crc kubenswrapper[4780]: W0219 09:44:53.903491 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0165cfba_3421_42e6_a9bd_c5ab0793035d.slice/crio-6069c9a04dc6c5bc1dbc150989b489f0c26d9df086fe3b5ca09084f36ab3e612 WatchSource:0}: Error finding container 6069c9a04dc6c5bc1dbc150989b489f0c26d9df086fe3b5ca09084f36ab3e612: Status 404 returned error can't find the container with id 6069c9a04dc6c5bc1dbc150989b489f0c26d9df086fe3b5ca09084f36ab3e612 Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.251677 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55cb6fc89-fbcvt"] Feb 19 09:44:54 crc kubenswrapper[4780]: W0219 09:44:54.255181 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8885b48d_867b_4e74_820b_d0fd765b4006.slice/crio-e949ee865963dcffdf4d03d1a0032a0ae1ed2142f1b7cb553383de51092a3802 WatchSource:0}: Error finding 
container e949ee865963dcffdf4d03d1a0032a0ae1ed2142f1b7cb553383de51092a3802: Status 404 returned error can't find the container with id e949ee865963dcffdf4d03d1a0032a0ae1ed2142f1b7cb553383de51092a3802 Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.487805 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" event={"ID":"8885b48d-867b-4e74-820b-d0fd765b4006","Type":"ContainerStarted","Data":"dfa15500c2748fc4b25061ea04d9ee37d4ab13537453830df9dade87569d258e"} Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.487853 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" event={"ID":"8885b48d-867b-4e74-820b-d0fd765b4006","Type":"ContainerStarted","Data":"e949ee865963dcffdf4d03d1a0032a0ae1ed2142f1b7cb553383de51092a3802"} Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.490835 4780 generic.go:334] "Generic (PLEG): container finished" podID="0165cfba-3421-42e6-a9bd-c5ab0793035d" containerID="cc75158d5db948b0cb12721a854de0af4aaab50920e1e2cacaadd02b165823e7" exitCode=0 Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.490969 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" event={"ID":"0165cfba-3421-42e6-a9bd-c5ab0793035d","Type":"ContainerDied","Data":"cc75158d5db948b0cb12721a854de0af4aaab50920e1e2cacaadd02b165823e7"} Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.490995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" event={"ID":"0165cfba-3421-42e6-a9bd-c5ab0793035d","Type":"ContainerStarted","Data":"6069c9a04dc6c5bc1dbc150989b489f0c26d9df086fe3b5ca09084f36ab3e612"} Feb 19 09:44:54 crc kubenswrapper[4780]: I0219 09:44:54.865618 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.031529 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-config\") pod \"0165cfba-3421-42e6-a9bd-c5ab0793035d\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.031645 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47wd5\" (UniqueName: \"kubernetes.io/projected/0165cfba-3421-42e6-a9bd-c5ab0793035d-kube-api-access-47wd5\") pod \"0165cfba-3421-42e6-a9bd-c5ab0793035d\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.031805 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-dns-svc\") pod \"0165cfba-3421-42e6-a9bd-c5ab0793035d\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.032602 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-ovsdbserver-nb\") pod \"0165cfba-3421-42e6-a9bd-c5ab0793035d\" (UID: \"0165cfba-3421-42e6-a9bd-c5ab0793035d\") " Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.039376 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0165cfba-3421-42e6-a9bd-c5ab0793035d-kube-api-access-47wd5" (OuterVolumeSpecName: "kube-api-access-47wd5") pod "0165cfba-3421-42e6-a9bd-c5ab0793035d" (UID: "0165cfba-3421-42e6-a9bd-c5ab0793035d"). InnerVolumeSpecName "kube-api-access-47wd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.072879 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0165cfba-3421-42e6-a9bd-c5ab0793035d" (UID: "0165cfba-3421-42e6-a9bd-c5ab0793035d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.075936 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-config" (OuterVolumeSpecName: "config") pod "0165cfba-3421-42e6-a9bd-c5ab0793035d" (UID: "0165cfba-3421-42e6-a9bd-c5ab0793035d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.081096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0165cfba-3421-42e6-a9bd-c5ab0793035d" (UID: "0165cfba-3421-42e6-a9bd-c5ab0793035d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.136053 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.136158 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47wd5\" (UniqueName: \"kubernetes.io/projected/0165cfba-3421-42e6-a9bd-c5ab0793035d-kube-api-access-47wd5\") on node \"crc\" DevicePath \"\"" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.136186 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.136210 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0165cfba-3421-42e6-a9bd-c5ab0793035d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.499462 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" event={"ID":"0165cfba-3421-42e6-a9bd-c5ab0793035d","Type":"ContainerDied","Data":"6069c9a04dc6c5bc1dbc150989b489f0c26d9df086fe3b5ca09084f36ab3e612"} Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.499735 4780 scope.go:117] "RemoveContainer" containerID="cc75158d5db948b0cb12721a854de0af4aaab50920e1e2cacaadd02b165823e7" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.499501 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-hktm7" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.501078 4780 generic.go:334] "Generic (PLEG): container finished" podID="8885b48d-867b-4e74-820b-d0fd765b4006" containerID="dfa15500c2748fc4b25061ea04d9ee37d4ab13537453830df9dade87569d258e" exitCode=0 Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.501111 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" event={"ID":"8885b48d-867b-4e74-820b-d0fd765b4006","Type":"ContainerDied","Data":"dfa15500c2748fc4b25061ea04d9ee37d4ab13537453830df9dade87569d258e"} Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.501186 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" event={"ID":"8885b48d-867b-4e74-820b-d0fd765b4006","Type":"ContainerStarted","Data":"ba04419b9adec605d045d549d9ef6588b94921edabf7e28695cd8158b13dac31"} Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.501226 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.531740 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" podStartSLOduration=2.531724124 podStartE2EDuration="2.531724124s" podCreationTimestamp="2026-02-19 09:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:55.530696379 +0000 UTC m=+5038.274353838" watchObservedRunningTime="2026-02-19 09:44:55.531724124 +0000 UTC m=+5038.275381573" Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.575101 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-hktm7"] Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.585176 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-697b8d7675-hktm7"]
Feb 19 09:44:55 crc kubenswrapper[4780]: I0219 09:44:55.960306 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0165cfba-3421-42e6-a9bd-c5ab0793035d" path="/var/lib/kubelet/pods/0165cfba-3421-42e6-a9bd-c5ab0793035d/volumes"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.486383 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"]
Feb 19 09:44:56 crc kubenswrapper[4780]: E0219 09:44:56.487197 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0165cfba-3421-42e6-a9bd-c5ab0793035d" containerName="init"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.487239 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0165cfba-3421-42e6-a9bd-c5ab0793035d" containerName="init"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.487699 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0165cfba-3421-42e6-a9bd-c5ab0793035d" containerName="init"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.488837 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.493459 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.493627 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.668759 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/beabdc62-9466-4b98-8f24-a68fefea15ee-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.668866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4h2\" (UniqueName: \"kubernetes.io/projected/beabdc62-9466-4b98-8f24-a68fefea15ee-kube-api-access-sg4h2\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.668912 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.770541 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/beabdc62-9466-4b98-8f24-a68fefea15ee-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.770850 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4h2\" (UniqueName: \"kubernetes.io/projected/beabdc62-9466-4b98-8f24-a68fefea15ee-kube-api-access-sg4h2\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.770891 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.773276 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.773303 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/302cd5de1940a4a1949a60fe54b76f96d8061ab4af61bed1c87d667ea22fa773/globalmount\"" pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.777505 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/beabdc62-9466-4b98-8f24-a68fefea15ee-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.791065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4h2\" (UniqueName: \"kubernetes.io/projected/beabdc62-9466-4b98-8f24-a68fefea15ee-kube-api-access-sg4h2\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.809962 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") pod \"ovn-copy-data\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " pod="openstack/ovn-copy-data"
Feb 19 09:44:56 crc kubenswrapper[4780]: I0219 09:44:56.820020 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Feb 19 09:44:57 crc kubenswrapper[4780]: I0219 09:44:57.341258 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Feb 19 09:44:57 crc kubenswrapper[4780]: W0219 09:44:57.341378 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeabdc62_9466_4b98_8f24_a68fefea15ee.slice/crio-114c851f0e56e6a3a58cf165a809856d194ce4b4322a807a6dabe63c0c252278 WatchSource:0}: Error finding container 114c851f0e56e6a3a58cf165a809856d194ce4b4322a807a6dabe63c0c252278: Status 404 returned error can't find the container with id 114c851f0e56e6a3a58cf165a809856d194ce4b4322a807a6dabe63c0c252278
Feb 19 09:44:57 crc kubenswrapper[4780]: I0219 09:44:57.533189 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"beabdc62-9466-4b98-8f24-a68fefea15ee","Type":"ContainerStarted","Data":"114c851f0e56e6a3a58cf165a809856d194ce4b4322a807a6dabe63c0c252278"}
Feb 19 09:44:57 crc kubenswrapper[4780]: I0219 09:44:57.949450 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:44:57 crc kubenswrapper[4780]: E0219 09:44:57.950004 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:44:58 crc kubenswrapper[4780]: I0219 09:44:58.546883 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"beabdc62-9466-4b98-8f24-a68fefea15ee","Type":"ContainerStarted","Data":"2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14"}
Feb 19 09:44:58 crc kubenswrapper[4780]: I0219 09:44:58.571294 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.050585892 podStartE2EDuration="3.571255531s" podCreationTimestamp="2026-02-19 09:44:55 +0000 UTC" firstStartedPulling="2026-02-19 09:44:57.343075284 +0000 UTC m=+5040.086732743" lastFinishedPulling="2026-02-19 09:44:57.863744893 +0000 UTC m=+5040.607402382" observedRunningTime="2026-02-19 09:44:58.563279567 +0000 UTC m=+5041.306937106" watchObservedRunningTime="2026-02-19 09:44:58.571255531 +0000 UTC m=+5041.314913030"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.144269 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"]
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.147227 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.150151 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.150313 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.163819 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"]
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.232532 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-secret-volume\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.232684 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66md\" (UniqueName: \"kubernetes.io/projected/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-kube-api-access-n66md\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.233117 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-config-volume\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.335180 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-config-volume\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.335366 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-secret-volume\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.335453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n66md\" (UniqueName: \"kubernetes.io/projected/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-kube-api-access-n66md\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.337413 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-config-volume\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.351359 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-secret-volume\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.369759 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n66md\" (UniqueName: \"kubernetes.io/projected/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-kube-api-access-n66md\") pod \"collect-profiles-29524905-8t4kd\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.466670 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:00 crc kubenswrapper[4780]: I0219 09:45:00.808866 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"]
Feb 19 09:45:01 crc kubenswrapper[4780]: I0219 09:45:01.581515 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac30c9c9-f698-4b4a-844c-34d5e9e13f80" containerID="47ab9826a3da9a7543e2c13731aea3d69002817a516873c0f8058beddbed1979" exitCode=0
Feb 19 09:45:01 crc kubenswrapper[4780]: I0219 09:45:01.581616 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd" event={"ID":"ac30c9c9-f698-4b4a-844c-34d5e9e13f80","Type":"ContainerDied","Data":"47ab9826a3da9a7543e2c13731aea3d69002817a516873c0f8058beddbed1979"}
Feb 19 09:45:01 crc kubenswrapper[4780]: I0219 09:45:01.581875 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd" event={"ID":"ac30c9c9-f698-4b4a-844c-34d5e9e13f80","Type":"ContainerStarted","Data":"66d02417b965f7833105e7b7530f18d0a2438e5b9e50a2e638fe56dafcf8b782"}
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.911703 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.986253 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-config-volume\") pod \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") "
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.986361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-secret-volume\") pod \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") "
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.986453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n66md\" (UniqueName: \"kubernetes.io/projected/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-kube-api-access-n66md\") pod \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\" (UID: \"ac30c9c9-f698-4b4a-844c-34d5e9e13f80\") "
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.987092 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac30c9c9-f698-4b4a-844c-34d5e9e13f80" (UID: "ac30c9c9-f698-4b4a-844c-34d5e9e13f80"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.991056 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac30c9c9-f698-4b4a-844c-34d5e9e13f80" (UID: "ac30c9c9-f698-4b4a-844c-34d5e9e13f80"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:02 crc kubenswrapper[4780]: I0219 09:45:02.991091 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-kube-api-access-n66md" (OuterVolumeSpecName: "kube-api-access-n66md") pod "ac30c9c9-f698-4b4a-844c-34d5e9e13f80" (UID: "ac30c9c9-f698-4b4a-844c-34d5e9e13f80"). InnerVolumeSpecName "kube-api-access-n66md". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.088739 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.088789 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n66md\" (UniqueName: \"kubernetes.io/projected/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-kube-api-access-n66md\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.088805 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac30c9c9-f698-4b4a-844c-34d5e9e13f80-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.620254 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd" event={"ID":"ac30c9c9-f698-4b4a-844c-34d5e9e13f80","Type":"ContainerDied","Data":"66d02417b965f7833105e7b7530f18d0a2438e5b9e50a2e638fe56dafcf8b782"}
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.620313 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.620338 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d02417b965f7833105e7b7530f18d0a2438e5b9e50a2e638fe56dafcf8b782"
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.803386 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt"
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.883914 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-hxbzv"]
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.884433 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" containerName="dnsmasq-dns" containerID="cri-o://107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac" gracePeriod=10
Feb 19 09:45:03 crc kubenswrapper[4780]: I0219 09:45:03.989898 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp"]
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:03.996450 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524860-h9kjp"]
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.094394 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 09:45:04 crc kubenswrapper[4780]: E0219 09:45:04.094750 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac30c9c9-f698-4b4a-844c-34d5e9e13f80" containerName="collect-profiles"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.094773 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac30c9c9-f698-4b4a-844c-34d5e9e13f80" containerName="collect-profiles"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.094909 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac30c9c9-f698-4b4a-844c-34d5e9e13f80" containerName="collect-profiles"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.095732 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.102996 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.103199 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.103346 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hxf9x"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.112575 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.241397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.241468 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmm26\" (UniqueName: \"kubernetes.io/projected/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-kube-api-access-zmm26\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.241632 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.241737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-scripts\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.241780 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-config\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.342710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-config\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.342800 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.342845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmm26\" (UniqueName: \"kubernetes.io/projected/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-kube-api-access-zmm26\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.342886 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.342908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-scripts\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.343689 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-scripts\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.343728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-config\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.344652 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.349377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.398394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmm26\" (UniqueName: \"kubernetes.io/projected/c1e3aa9e-9dc2-4815-b2e1-9707609725ea-kube-api-access-zmm26\") pod \"ovn-northd-0\" (UID: \"c1e3aa9e-9dc2-4815-b2e1-9707609725ea\") " pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.410637 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.457741 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.634869 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5071553-02b5-42e0-ab21-b865624efbb3" containerID="107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac" exitCode=0
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.634918 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" event={"ID":"a5071553-02b5-42e0-ab21-b865624efbb3","Type":"ContainerDied","Data":"107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac"}
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.634948 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv" event={"ID":"a5071553-02b5-42e0-ab21-b865624efbb3","Type":"ContainerDied","Data":"acf9bae6a1dd719bb2f678c7d29834d239f251ea7eb8cba56a06d7f84ca76b4c"}
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.634967 4780 scope.go:117] "RemoveContainer" containerID="107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.635111 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-hxbzv"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.647386 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwwk\" (UniqueName: \"kubernetes.io/projected/a5071553-02b5-42e0-ab21-b865624efbb3-kube-api-access-2nwwk\") pod \"a5071553-02b5-42e0-ab21-b865624efbb3\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") "
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.647529 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-config\") pod \"a5071553-02b5-42e0-ab21-b865624efbb3\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") "
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.647555 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-dns-svc\") pod \"a5071553-02b5-42e0-ab21-b865624efbb3\" (UID: \"a5071553-02b5-42e0-ab21-b865624efbb3\") "
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.652025 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5071553-02b5-42e0-ab21-b865624efbb3-kube-api-access-2nwwk" (OuterVolumeSpecName: "kube-api-access-2nwwk") pod "a5071553-02b5-42e0-ab21-b865624efbb3" (UID: "a5071553-02b5-42e0-ab21-b865624efbb3"). InnerVolumeSpecName "kube-api-access-2nwwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.652744 4780 scope.go:117] "RemoveContainer" containerID="2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.691344 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-config" (OuterVolumeSpecName: "config") pod "a5071553-02b5-42e0-ab21-b865624efbb3" (UID: "a5071553-02b5-42e0-ab21-b865624efbb3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.696598 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5071553-02b5-42e0-ab21-b865624efbb3" (UID: "a5071553-02b5-42e0-ab21-b865624efbb3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.711329 4780 scope.go:117] "RemoveContainer" containerID="107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac"
Feb 19 09:45:04 crc kubenswrapper[4780]: E0219 09:45:04.712055 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac\": container with ID starting with 107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac not found: ID does not exist" containerID="107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.712117 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac"} err="failed to get container status \"107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac\": rpc error: code = NotFound desc = could not find container \"107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac\": container with ID starting with 107f0cea40edc48f393c1200feff14362411f1412d9ab61719f635f220b491ac not found: ID does not exist"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.712168 4780 scope.go:117] "RemoveContainer" containerID="2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5"
Feb 19 09:45:04 crc kubenswrapper[4780]: E0219 09:45:04.713746 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5\": container with ID starting with 2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5 not found: ID does not exist" containerID="2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.713790 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5"} err="failed to get container status \"2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5\": rpc error: code = NotFound desc = could not find container \"2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5\": container with ID starting with 2f008ac8adb3b87619c253e409bc46c9d7f4f1f9fd964ec19e7fa6286ac550f5 not found: ID does not exist"
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.749308 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-config\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.749336 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5071553-02b5-42e0-ab21-b865624efbb3-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.749345 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwwk\" (UniqueName: \"kubernetes.io/projected/a5071553-02b5-42e0-ab21-b865624efbb3-kube-api-access-2nwwk\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:04 crc kubenswrapper[4780]: I0219 09:45:04.980143 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-hxbzv"]
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.003187 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-hxbzv"]
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.046026 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.651332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1e3aa9e-9dc2-4815-b2e1-9707609725ea","Type":"ContainerStarted","Data":"03ecef8ac18e8a39884dde414417cb7547dfe24bd7aafb26e00de514358f6335"}
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.651699 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.651722 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1e3aa9e-9dc2-4815-b2e1-9707609725ea","Type":"ContainerStarted","Data":"7569c3146868ba0c806cede752a0cc258c9aeb57bf82de294c08cf8d54a6431c"}
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.651741 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1e3aa9e-9dc2-4815-b2e1-9707609725ea","Type":"ContainerStarted","Data":"73ed73d05dcd303eeef993b236b867403fd66069c1c5833becf95a8e24fae7f4"}
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.695398 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.695378673 podStartE2EDuration="1.695378673s" podCreationTimestamp="2026-02-19 09:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:05.68093232 +0000 UTC m=+5048.424589779" watchObservedRunningTime="2026-02-19 09:45:05.695378673 +0000 UTC m=+5048.439036122"
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.956661 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" path="/var/lib/kubelet/pods/a5071553-02b5-42e0-ab21-b865624efbb3/volumes"
Feb 19 09:45:05 crc kubenswrapper[4780]: I0219 09:45:05.958394 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca03495-9de0-43ec-b0bf-11dfc5fc8d70" path="/var/lib/kubelet/pods/dca03495-9de0-43ec-b0bf-11dfc5fc8d70/volumes"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.636458 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6v8gc"]
Feb 19 09:45:09 crc kubenswrapper[4780]: E0219 09:45:09.637067 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" containerName="dnsmasq-dns"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.637083 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" containerName="dnsmasq-dns"
Feb 19 09:45:09 crc kubenswrapper[4780]: E0219 09:45:09.637107 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" containerName="init"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.637115 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" containerName="init"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.637313 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5071553-02b5-42e0-ab21-b865624efbb3" containerName="dnsmasq-dns"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.637909 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6v8gc"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.648953 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-eac9-account-create-update-7q7gr"]
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.650245 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eac9-account-create-update-7q7gr"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.687217 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6v8gc"]
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.695028 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eac9-account-create-update-7q7gr"]
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.698290 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.746243 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a944e3-b129-4cc4-9792-e07bb079f89c-operator-scripts\") pod \"keystone-eac9-account-create-update-7q7gr\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " pod="openstack/keystone-eac9-account-create-update-7q7gr"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.746312 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562e7a0b-d308-4e21-9164-8ef5ff574af9-operator-scripts\") pod \"keystone-db-create-6v8gc\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " pod="openstack/keystone-db-create-6v8gc"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.746400 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htr5p\" (UniqueName: \"kubernetes.io/projected/e4a944e3-b129-4cc4-9792-e07bb079f89c-kube-api-access-htr5p\") pod \"keystone-eac9-account-create-update-7q7gr\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " pod="openstack/keystone-eac9-account-create-update-7q7gr"
Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.746635 4780 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htwc\" (UniqueName: \"kubernetes.io/projected/562e7a0b-d308-4e21-9164-8ef5ff574af9-kube-api-access-7htwc\") pod \"keystone-db-create-6v8gc\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.848343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a944e3-b129-4cc4-9792-e07bb079f89c-operator-scripts\") pod \"keystone-eac9-account-create-update-7q7gr\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.848620 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562e7a0b-d308-4e21-9164-8ef5ff574af9-operator-scripts\") pod \"keystone-db-create-6v8gc\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.848649 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htr5p\" (UniqueName: \"kubernetes.io/projected/e4a944e3-b129-4cc4-9792-e07bb079f89c-kube-api-access-htr5p\") pod \"keystone-eac9-account-create-update-7q7gr\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.848703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7htwc\" (UniqueName: \"kubernetes.io/projected/562e7a0b-d308-4e21-9164-8ef5ff574af9-kube-api-access-7htwc\") pod \"keystone-db-create-6v8gc\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 
09:45:09.849304 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a944e3-b129-4cc4-9792-e07bb079f89c-operator-scripts\") pod \"keystone-eac9-account-create-update-7q7gr\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.849613 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562e7a0b-d308-4e21-9164-8ef5ff574af9-operator-scripts\") pod \"keystone-db-create-6v8gc\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.871394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htr5p\" (UniqueName: \"kubernetes.io/projected/e4a944e3-b129-4cc4-9792-e07bb079f89c-kube-api-access-htr5p\") pod \"keystone-eac9-account-create-update-7q7gr\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:09 crc kubenswrapper[4780]: I0219 09:45:09.875789 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7htwc\" (UniqueName: \"kubernetes.io/projected/562e7a0b-d308-4e21-9164-8ef5ff574af9-kube-api-access-7htwc\") pod \"keystone-db-create-6v8gc\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:10 crc kubenswrapper[4780]: I0219 09:45:10.011450 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:10 crc kubenswrapper[4780]: I0219 09:45:10.027379 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:10 crc kubenswrapper[4780]: I0219 09:45:10.521884 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6v8gc"] Feb 19 09:45:10 crc kubenswrapper[4780]: W0219 09:45:10.522475 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod562e7a0b_d308_4e21_9164_8ef5ff574af9.slice/crio-27f8c99873f1c1e6319e378fea17738c42820c9e7e62da99bc1dc8bcf101b7b7 WatchSource:0}: Error finding container 27f8c99873f1c1e6319e378fea17738c42820c9e7e62da99bc1dc8bcf101b7b7: Status 404 returned error can't find the container with id 27f8c99873f1c1e6319e378fea17738c42820c9e7e62da99bc1dc8bcf101b7b7 Feb 19 09:45:10 crc kubenswrapper[4780]: I0219 09:45:10.574324 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eac9-account-create-update-7q7gr"] Feb 19 09:45:10 crc kubenswrapper[4780]: W0219 09:45:10.578928 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a944e3_b129_4cc4_9792_e07bb079f89c.slice/crio-256feb3aaefeb6faddbffa9a2213d2ce796a306380240ef20d5cc949f56665a1 WatchSource:0}: Error finding container 256feb3aaefeb6faddbffa9a2213d2ce796a306380240ef20d5cc949f56665a1: Status 404 returned error can't find the container with id 256feb3aaefeb6faddbffa9a2213d2ce796a306380240ef20d5cc949f56665a1 Feb 19 09:45:10 crc kubenswrapper[4780]: I0219 09:45:10.710090 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6v8gc" event={"ID":"562e7a0b-d308-4e21-9164-8ef5ff574af9","Type":"ContainerStarted","Data":"27f8c99873f1c1e6319e378fea17738c42820c9e7e62da99bc1dc8bcf101b7b7"} Feb 19 09:45:10 crc kubenswrapper[4780]: I0219 09:45:10.711579 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eac9-account-create-update-7q7gr" 
event={"ID":"e4a944e3-b129-4cc4-9792-e07bb079f89c","Type":"ContainerStarted","Data":"256feb3aaefeb6faddbffa9a2213d2ce796a306380240ef20d5cc949f56665a1"} Feb 19 09:45:11 crc kubenswrapper[4780]: I0219 09:45:11.722975 4780 generic.go:334] "Generic (PLEG): container finished" podID="562e7a0b-d308-4e21-9164-8ef5ff574af9" containerID="b573016ed4567b6cb698488ebbdbb00c2b4294f9aa3d6ca315dc9066e4b88a24" exitCode=0 Feb 19 09:45:11 crc kubenswrapper[4780]: I0219 09:45:11.723296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6v8gc" event={"ID":"562e7a0b-d308-4e21-9164-8ef5ff574af9","Type":"ContainerDied","Data":"b573016ed4567b6cb698488ebbdbb00c2b4294f9aa3d6ca315dc9066e4b88a24"} Feb 19 09:45:11 crc kubenswrapper[4780]: I0219 09:45:11.725406 4780 generic.go:334] "Generic (PLEG): container finished" podID="e4a944e3-b129-4cc4-9792-e07bb079f89c" containerID="920f2d5e33fb27396faffd182eacfc25ea97e70eb90883b34936198cc54df173" exitCode=0 Feb 19 09:45:11 crc kubenswrapper[4780]: I0219 09:45:11.725434 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eac9-account-create-update-7q7gr" event={"ID":"e4a944e3-b129-4cc4-9792-e07bb079f89c","Type":"ContainerDied","Data":"920f2d5e33fb27396faffd182eacfc25ea97e70eb90883b34936198cc54df173"} Feb 19 09:45:11 crc kubenswrapper[4780]: I0219 09:45:11.938833 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:45:11 crc kubenswrapper[4780]: E0219 09:45:11.939340 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:45:13 crc 
kubenswrapper[4780]: I0219 09:45:13.138690 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.224193 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.303949 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7htwc\" (UniqueName: \"kubernetes.io/projected/562e7a0b-d308-4e21-9164-8ef5ff574af9-kube-api-access-7htwc\") pod \"562e7a0b-d308-4e21-9164-8ef5ff574af9\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.304082 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562e7a0b-d308-4e21-9164-8ef5ff574af9-operator-scripts\") pod \"562e7a0b-d308-4e21-9164-8ef5ff574af9\" (UID: \"562e7a0b-d308-4e21-9164-8ef5ff574af9\") " Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.304813 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562e7a0b-d308-4e21-9164-8ef5ff574af9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "562e7a0b-d308-4e21-9164-8ef5ff574af9" (UID: "562e7a0b-d308-4e21-9164-8ef5ff574af9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.313842 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562e7a0b-d308-4e21-9164-8ef5ff574af9-kube-api-access-7htwc" (OuterVolumeSpecName: "kube-api-access-7htwc") pod "562e7a0b-d308-4e21-9164-8ef5ff574af9" (UID: "562e7a0b-d308-4e21-9164-8ef5ff574af9"). InnerVolumeSpecName "kube-api-access-7htwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.405698 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a944e3-b129-4cc4-9792-e07bb079f89c-operator-scripts\") pod \"e4a944e3-b129-4cc4-9792-e07bb079f89c\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.405835 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htr5p\" (UniqueName: \"kubernetes.io/projected/e4a944e3-b129-4cc4-9792-e07bb079f89c-kube-api-access-htr5p\") pod \"e4a944e3-b129-4cc4-9792-e07bb079f89c\" (UID: \"e4a944e3-b129-4cc4-9792-e07bb079f89c\") " Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.406512 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a944e3-b129-4cc4-9792-e07bb079f89c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4a944e3-b129-4cc4-9792-e07bb079f89c" (UID: "e4a944e3-b129-4cc4-9792-e07bb079f89c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.406802 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562e7a0b-d308-4e21-9164-8ef5ff574af9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.406835 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a944e3-b129-4cc4-9792-e07bb079f89c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.406856 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7htwc\" (UniqueName: \"kubernetes.io/projected/562e7a0b-d308-4e21-9164-8ef5ff574af9-kube-api-access-7htwc\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.410157 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a944e3-b129-4cc4-9792-e07bb079f89c-kube-api-access-htr5p" (OuterVolumeSpecName: "kube-api-access-htr5p") pod "e4a944e3-b129-4cc4-9792-e07bb079f89c" (UID: "e4a944e3-b129-4cc4-9792-e07bb079f89c"). InnerVolumeSpecName "kube-api-access-htr5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.508303 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htr5p\" (UniqueName: \"kubernetes.io/projected/e4a944e3-b129-4cc4-9792-e07bb079f89c-kube-api-access-htr5p\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.747618 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eac9-account-create-update-7q7gr" event={"ID":"e4a944e3-b129-4cc4-9792-e07bb079f89c","Type":"ContainerDied","Data":"256feb3aaefeb6faddbffa9a2213d2ce796a306380240ef20d5cc949f56665a1"} Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.748335 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="256feb3aaefeb6faddbffa9a2213d2ce796a306380240ef20d5cc949f56665a1" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.748010 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eac9-account-create-update-7q7gr" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.749816 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6v8gc" event={"ID":"562e7a0b-d308-4e21-9164-8ef5ff574af9","Type":"ContainerDied","Data":"27f8c99873f1c1e6319e378fea17738c42820c9e7e62da99bc1dc8bcf101b7b7"} Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.749848 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f8c99873f1c1e6319e378fea17738c42820c9e7e62da99bc1dc8bcf101b7b7" Feb 19 09:45:13 crc kubenswrapper[4780]: I0219 09:45:13.749875 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6v8gc" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.202517 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-f8ppv"] Feb 19 09:45:15 crc kubenswrapper[4780]: E0219 09:45:15.202914 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a944e3-b129-4cc4-9792-e07bb079f89c" containerName="mariadb-account-create-update" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.202933 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a944e3-b129-4cc4-9792-e07bb079f89c" containerName="mariadb-account-create-update" Feb 19 09:45:15 crc kubenswrapper[4780]: E0219 09:45:15.202958 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562e7a0b-d308-4e21-9164-8ef5ff574af9" containerName="mariadb-database-create" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.202967 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="562e7a0b-d308-4e21-9164-8ef5ff574af9" containerName="mariadb-database-create" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.203227 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="562e7a0b-d308-4e21-9164-8ef5ff574af9" containerName="mariadb-database-create" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.203249 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a944e3-b129-4cc4-9792-e07bb079f89c" containerName="mariadb-account-create-update" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.203879 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.206248 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.207498 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztnqd" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.207771 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.207931 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.223212 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f8ppv"] Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.339937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-config-data\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.340018 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6px\" (UniqueName: \"kubernetes.io/projected/f3a54176-4984-4c7b-b0e1-f424d7cbd298-kube-api-access-mn6px\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.340161 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-combined-ca-bundle\") pod \"keystone-db-sync-f8ppv\" (UID: 
\"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.442250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-config-data\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.442313 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn6px\" (UniqueName: \"kubernetes.io/projected/f3a54176-4984-4c7b-b0e1-f424d7cbd298-kube-api-access-mn6px\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.442397 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-combined-ca-bundle\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.448248 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-config-data\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.452184 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-combined-ca-bundle\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: 
I0219 09:45:15.467010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn6px\" (UniqueName: \"kubernetes.io/projected/f3a54176-4984-4c7b-b0e1-f424d7cbd298-kube-api-access-mn6px\") pod \"keystone-db-sync-f8ppv\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:15 crc kubenswrapper[4780]: I0219 09:45:15.527713 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:16 crc kubenswrapper[4780]: I0219 09:45:16.035303 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f8ppv"] Feb 19 09:45:16 crc kubenswrapper[4780]: I0219 09:45:16.781270 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f8ppv" event={"ID":"f3a54176-4984-4c7b-b0e1-f424d7cbd298","Type":"ContainerStarted","Data":"a62cddb3fd3e65899b6d0f40592f0fdef51aed70bbfd1c6252e59c88a70bda68"} Feb 19 09:45:16 crc kubenswrapper[4780]: I0219 09:45:16.781615 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f8ppv" event={"ID":"f3a54176-4984-4c7b-b0e1-f424d7cbd298","Type":"ContainerStarted","Data":"30b5ead5d2006bdc92540fad0d95fae57a2b2c94778763e391eddb5d41e15027"} Feb 19 09:45:16 crc kubenswrapper[4780]: I0219 09:45:16.807776 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-f8ppv" podStartSLOduration=1.807758078 podStartE2EDuration="1.807758078s" podCreationTimestamp="2026-02-19 09:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:16.805056703 +0000 UTC m=+5059.548714162" watchObservedRunningTime="2026-02-19 09:45:16.807758078 +0000 UTC m=+5059.551415527" Feb 19 09:45:17 crc kubenswrapper[4780]: I0219 09:45:17.788700 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="f3a54176-4984-4c7b-b0e1-f424d7cbd298" containerID="a62cddb3fd3e65899b6d0f40592f0fdef51aed70bbfd1c6252e59c88a70bda68" exitCode=0 Feb 19 09:45:17 crc kubenswrapper[4780]: I0219 09:45:17.788802 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f8ppv" event={"ID":"f3a54176-4984-4c7b-b0e1-f424d7cbd298","Type":"ContainerDied","Data":"a62cddb3fd3e65899b6d0f40592f0fdef51aed70bbfd1c6252e59c88a70bda68"} Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.167862 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.327614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-config-data\") pod \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.327818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-combined-ca-bundle\") pod \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.327853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn6px\" (UniqueName: \"kubernetes.io/projected/f3a54176-4984-4c7b-b0e1-f424d7cbd298-kube-api-access-mn6px\") pod \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\" (UID: \"f3a54176-4984-4c7b-b0e1-f424d7cbd298\") " Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.333618 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a54176-4984-4c7b-b0e1-f424d7cbd298-kube-api-access-mn6px" (OuterVolumeSpecName: "kube-api-access-mn6px") pod 
"f3a54176-4984-4c7b-b0e1-f424d7cbd298" (UID: "f3a54176-4984-4c7b-b0e1-f424d7cbd298"). InnerVolumeSpecName "kube-api-access-mn6px". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.363922 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3a54176-4984-4c7b-b0e1-f424d7cbd298" (UID: "f3a54176-4984-4c7b-b0e1-f424d7cbd298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.375755 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-config-data" (OuterVolumeSpecName: "config-data") pod "f3a54176-4984-4c7b-b0e1-f424d7cbd298" (UID: "f3a54176-4984-4c7b-b0e1-f424d7cbd298"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.430077 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.430169 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn6px\" (UniqueName: \"kubernetes.io/projected/f3a54176-4984-4c7b-b0e1-f424d7cbd298-kube-api-access-mn6px\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.430187 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a54176-4984-4c7b-b0e1-f424d7cbd298-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.806467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f8ppv" event={"ID":"f3a54176-4984-4c7b-b0e1-f424d7cbd298","Type":"ContainerDied","Data":"30b5ead5d2006bdc92540fad0d95fae57a2b2c94778763e391eddb5d41e15027"} Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.806808 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b5ead5d2006bdc92540fad0d95fae57a2b2c94778763e391eddb5d41e15027" Feb 19 09:45:19 crc kubenswrapper[4780]: I0219 09:45:19.806666 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f8ppv" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.079930 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5757586b9-dfw5j"] Feb 19 09:45:20 crc kubenswrapper[4780]: E0219 09:45:20.081399 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a54176-4984-4c7b-b0e1-f424d7cbd298" containerName="keystone-db-sync" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.081427 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a54176-4984-4c7b-b0e1-f424d7cbd298" containerName="keystone-db-sync" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.081657 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a54176-4984-4c7b-b0e1-f424d7cbd298" containerName="keystone-db-sync" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.082751 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.105787 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5757586b9-dfw5j"] Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.175188 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tgm2j"] Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.177203 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.181280 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.181496 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.181603 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztnqd" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.181714 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.181843 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.183056 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tgm2j"] Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.244434 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-dns-svc\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.244481 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2kd\" (UniqueName: \"kubernetes.io/projected/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-kube-api-access-kz2kd\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.244513 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-nb\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.244565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-config\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.244600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-sb\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.345790 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-nb\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.345868 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-combined-ca-bundle\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.345888 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jtrsz\" (UniqueName: \"kubernetes.io/projected/7b137173-b77d-4659-b2fd-ea223f845be8-kube-api-access-jtrsz\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.345907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-scripts\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346072 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-config\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-config-data\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346171 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-fernet-keys\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346235 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-sb\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-credential-keys\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346396 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-dns-svc\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz2kd\" (UniqueName: \"kubernetes.io/projected/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-kube-api-access-kz2kd\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346814 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-config\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.346822 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.347166 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-sb\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.347169 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-dns-svc\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.366002 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz2kd\" (UniqueName: \"kubernetes.io/projected/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-kube-api-access-kz2kd\") pod \"dnsmasq-dns-5757586b9-dfw5j\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.401096 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.447612 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-credential-keys\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.447936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-combined-ca-bundle\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.447955 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrsz\" (UniqueName: \"kubernetes.io/projected/7b137173-b77d-4659-b2fd-ea223f845be8-kube-api-access-jtrsz\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.447972 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-scripts\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.447996 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-config-data\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc 
kubenswrapper[4780]: I0219 09:45:20.448015 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-fernet-keys\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.453729 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-scripts\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.453877 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-combined-ca-bundle\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.453841 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-credential-keys\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.454017 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-config-data\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.454378 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-fernet-keys\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.464917 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrsz\" (UniqueName: \"kubernetes.io/projected/7b137173-b77d-4659-b2fd-ea223f845be8-kube-api-access-jtrsz\") pod \"keystone-bootstrap-tgm2j\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.504842 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:20 crc kubenswrapper[4780]: I0219 09:45:20.864876 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5757586b9-dfw5j"] Feb 19 09:45:20 crc kubenswrapper[4780]: W0219 09:45:20.868487 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc05ab1e_973b_4fdc_a0d2_8da967c5f88c.slice/crio-c747f84710744d97fb8d2b03e6daadd4c2005f085b0b5664be0d0417fbd8eed1 WatchSource:0}: Error finding container c747f84710744d97fb8d2b03e6daadd4c2005f085b0b5664be0d0417fbd8eed1: Status 404 returned error can't find the container with id c747f84710744d97fb8d2b03e6daadd4c2005f085b0b5664be0d0417fbd8eed1 Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.010394 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tgm2j"] Feb 19 09:45:21 crc kubenswrapper[4780]: W0219 09:45:21.013363 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b137173_b77d_4659_b2fd_ea223f845be8.slice/crio-f1be072c5e968767c2059a9081b3516999ef4d74b563446b8861215cf201a052 WatchSource:0}: Error finding container 
f1be072c5e968767c2059a9081b3516999ef4d74b563446b8861215cf201a052: Status 404 returned error can't find the container with id f1be072c5e968767c2059a9081b3516999ef4d74b563446b8861215cf201a052 Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.825522 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgm2j" event={"ID":"7b137173-b77d-4659-b2fd-ea223f845be8","Type":"ContainerStarted","Data":"9c94caff25a36c45edac07f3b6711757ea87257ee87816d7a6b6d09c0b189f5b"} Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.825922 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgm2j" event={"ID":"7b137173-b77d-4659-b2fd-ea223f845be8","Type":"ContainerStarted","Data":"f1be072c5e968767c2059a9081b3516999ef4d74b563446b8861215cf201a052"} Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.827203 4780 generic.go:334] "Generic (PLEG): container finished" podID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerID="cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679" exitCode=0 Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.827273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" event={"ID":"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c","Type":"ContainerDied","Data":"cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679"} Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.827318 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" event={"ID":"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c","Type":"ContainerStarted","Data":"c747f84710744d97fb8d2b03e6daadd4c2005f085b0b5664be0d0417fbd8eed1"} Feb 19 09:45:21 crc kubenswrapper[4780]: I0219 09:45:21.856598 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tgm2j" podStartSLOduration=1.856575147 podStartE2EDuration="1.856575147s" podCreationTimestamp="2026-02-19 09:45:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:21.848477619 +0000 UTC m=+5064.592135118" watchObservedRunningTime="2026-02-19 09:45:21.856575147 +0000 UTC m=+5064.600232606" Feb 19 09:45:22 crc kubenswrapper[4780]: I0219 09:45:22.836970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" event={"ID":"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c","Type":"ContainerStarted","Data":"a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db"} Feb 19 09:45:22 crc kubenswrapper[4780]: I0219 09:45:22.837267 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:45:22 crc kubenswrapper[4780]: I0219 09:45:22.872196 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" podStartSLOduration=2.872175326 podStartE2EDuration="2.872175326s" podCreationTimestamp="2026-02-19 09:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:22.868873275 +0000 UTC m=+5065.612530734" watchObservedRunningTime="2026-02-19 09:45:22.872175326 +0000 UTC m=+5065.615832785" Feb 19 09:45:24 crc kubenswrapper[4780]: I0219 09:45:24.498337 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 09:45:24 crc kubenswrapper[4780]: I0219 09:45:24.851539 4780 generic.go:334] "Generic (PLEG): container finished" podID="7b137173-b77d-4659-b2fd-ea223f845be8" containerID="9c94caff25a36c45edac07f3b6711757ea87257ee87816d7a6b6d09c0b189f5b" exitCode=0 Feb 19 09:45:24 crc kubenswrapper[4780]: I0219 09:45:24.851624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgm2j" 
event={"ID":"7b137173-b77d-4659-b2fd-ea223f845be8","Type":"ContainerDied","Data":"9c94caff25a36c45edac07f3b6711757ea87257ee87816d7a6b6d09c0b189f5b"} Feb 19 09:45:24 crc kubenswrapper[4780]: I0219 09:45:24.938292 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:45:24 crc kubenswrapper[4780]: E0219 09:45:24.938548 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.174224 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.348857 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-fernet-keys\") pod \"7b137173-b77d-4659-b2fd-ea223f845be8\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.348986 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-scripts\") pod \"7b137173-b77d-4659-b2fd-ea223f845be8\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.349022 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrsz\" (UniqueName: \"kubernetes.io/projected/7b137173-b77d-4659-b2fd-ea223f845be8-kube-api-access-jtrsz\") pod 
\"7b137173-b77d-4659-b2fd-ea223f845be8\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.349969 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-combined-ca-bundle\") pod \"7b137173-b77d-4659-b2fd-ea223f845be8\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.350070 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-credential-keys\") pod \"7b137173-b77d-4659-b2fd-ea223f845be8\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.350112 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-config-data\") pod \"7b137173-b77d-4659-b2fd-ea223f845be8\" (UID: \"7b137173-b77d-4659-b2fd-ea223f845be8\") " Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.358412 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-scripts" (OuterVolumeSpecName: "scripts") pod "7b137173-b77d-4659-b2fd-ea223f845be8" (UID: "7b137173-b77d-4659-b2fd-ea223f845be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.359388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7b137173-b77d-4659-b2fd-ea223f845be8" (UID: "7b137173-b77d-4659-b2fd-ea223f845be8"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.361330 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7b137173-b77d-4659-b2fd-ea223f845be8" (UID: "7b137173-b77d-4659-b2fd-ea223f845be8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.364345 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b137173-b77d-4659-b2fd-ea223f845be8-kube-api-access-jtrsz" (OuterVolumeSpecName: "kube-api-access-jtrsz") pod "7b137173-b77d-4659-b2fd-ea223f845be8" (UID: "7b137173-b77d-4659-b2fd-ea223f845be8"). InnerVolumeSpecName "kube-api-access-jtrsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.389499 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-config-data" (OuterVolumeSpecName: "config-data") pod "7b137173-b77d-4659-b2fd-ea223f845be8" (UID: "7b137173-b77d-4659-b2fd-ea223f845be8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.390346 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b137173-b77d-4659-b2fd-ea223f845be8" (UID: "7b137173-b77d-4659-b2fd-ea223f845be8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.452473 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.452530 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.452548 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.452566 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.452587 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrsz\" (UniqueName: \"kubernetes.io/projected/7b137173-b77d-4659-b2fd-ea223f845be8-kube-api-access-jtrsz\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.452605 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b137173-b77d-4659-b2fd-ea223f845be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.874910 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgm2j" event={"ID":"7b137173-b77d-4659-b2fd-ea223f845be8","Type":"ContainerDied","Data":"f1be072c5e968767c2059a9081b3516999ef4d74b563446b8861215cf201a052"} Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 
09:45:26.874972 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1be072c5e968767c2059a9081b3516999ef4d74b563446b8861215cf201a052" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.875045 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tgm2j" Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.977022 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tgm2j"] Feb 19 09:45:26 crc kubenswrapper[4780]: I0219 09:45:26.984487 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tgm2j"] Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.069886 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-54kg5"] Feb 19 09:45:27 crc kubenswrapper[4780]: E0219 09:45:27.071574 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b137173-b77d-4659-b2fd-ea223f845be8" containerName="keystone-bootstrap" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.071606 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b137173-b77d-4659-b2fd-ea223f845be8" containerName="keystone-bootstrap" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.071874 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b137173-b77d-4659-b2fd-ea223f845be8" containerName="keystone-bootstrap" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.072638 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-54kg5" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.075986 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.076060 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.077362 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztnqd" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.077630 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.077970 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.092957 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-54kg5"] Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.165942 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-scripts\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.166028 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-fernet-keys\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5" Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.166237 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-combined-ca-bundle\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.166422 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-config-data\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.166501 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kw9\" (UniqueName: \"kubernetes.io/projected/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-kube-api-access-z2kw9\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.166737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-credential-keys\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.268375 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-scripts\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.268432 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-fernet-keys\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.268465 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-combined-ca-bundle\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.268496 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-config-data\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.268518 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kw9\" (UniqueName: \"kubernetes.io/projected/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-kube-api-access-z2kw9\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.268560 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-credential-keys\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.275693 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-scripts\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.276297 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-combined-ca-bundle\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.277417 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-fernet-keys\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.278514 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-config-data\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.284230 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-credential-keys\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.288599 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kw9\" (UniqueName: \"kubernetes.io/projected/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-kube-api-access-z2kw9\") pod \"keystone-bootstrap-54kg5\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") " pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.388920 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.937410 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-54kg5"]
Feb 19 09:45:27 crc kubenswrapper[4780]: W0219 09:45:27.939235 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba05d2b1_a1c5_473a_ac1c_9b60da468ade.slice/crio-3d9cd8ee30ce579782e5e545c82c2b18f66ed6872476dbab9530deb771671acc WatchSource:0}: Error finding container 3d9cd8ee30ce579782e5e545c82c2b18f66ed6872476dbab9530deb771671acc: Status 404 returned error can't find the container with id 3d9cd8ee30ce579782e5e545c82c2b18f66ed6872476dbab9530deb771671acc
Feb 19 09:45:27 crc kubenswrapper[4780]: I0219 09:45:27.956095 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b137173-b77d-4659-b2fd-ea223f845be8" path="/var/lib/kubelet/pods/7b137173-b77d-4659-b2fd-ea223f845be8/volumes"
Feb 19 09:45:28 crc kubenswrapper[4780]: I0219 09:45:28.894916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-54kg5" event={"ID":"ba05d2b1-a1c5-473a-ac1c-9b60da468ade","Type":"ContainerStarted","Data":"7bbbdc68804af2d2db47fbb2ff8d5ac71409d0bf2b86edd6dbd3b2f564218811"}
Feb 19 09:45:28 crc kubenswrapper[4780]: I0219 09:45:28.895465 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-54kg5" event={"ID":"ba05d2b1-a1c5-473a-ac1c-9b60da468ade","Type":"ContainerStarted","Data":"3d9cd8ee30ce579782e5e545c82c2b18f66ed6872476dbab9530deb771671acc"}
Feb 19 09:45:28 crc kubenswrapper[4780]: I0219 09:45:28.927375 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-54kg5" podStartSLOduration=1.927350397 podStartE2EDuration="1.927350397s" podCreationTimestamp="2026-02-19 09:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:28.916555484 +0000 UTC m=+5071.660212973" watchObservedRunningTime="2026-02-19 09:45:28.927350397 +0000 UTC m=+5071.671007846"
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.403447 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5757586b9-dfw5j"
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.484259 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55cb6fc89-fbcvt"]
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.484667 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" containerName="dnsmasq-dns" containerID="cri-o://ba04419b9adec605d045d549d9ef6588b94921edabf7e28695cd8158b13dac31" gracePeriod=10
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.912610 4780 generic.go:334] "Generic (PLEG): container finished" podID="ba05d2b1-a1c5-473a-ac1c-9b60da468ade" containerID="7bbbdc68804af2d2db47fbb2ff8d5ac71409d0bf2b86edd6dbd3b2f564218811" exitCode=0
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.912910 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-54kg5" event={"ID":"ba05d2b1-a1c5-473a-ac1c-9b60da468ade","Type":"ContainerDied","Data":"7bbbdc68804af2d2db47fbb2ff8d5ac71409d0bf2b86edd6dbd3b2f564218811"}
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.915162 4780 generic.go:334] "Generic (PLEG): container finished" podID="8885b48d-867b-4e74-820b-d0fd765b4006" containerID="ba04419b9adec605d045d549d9ef6588b94921edabf7e28695cd8158b13dac31" exitCode=0
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.915206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" event={"ID":"8885b48d-867b-4e74-820b-d0fd765b4006","Type":"ContainerDied","Data":"ba04419b9adec605d045d549d9ef6588b94921edabf7e28695cd8158b13dac31"}
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.915231 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt" event={"ID":"8885b48d-867b-4e74-820b-d0fd765b4006","Type":"ContainerDied","Data":"e949ee865963dcffdf4d03d1a0032a0ae1ed2142f1b7cb553383de51092a3802"}
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.915243 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e949ee865963dcffdf4d03d1a0032a0ae1ed2142f1b7cb553383de51092a3802"
Feb 19 09:45:30 crc kubenswrapper[4780]: I0219 09:45:30.950934 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt"
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.042483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hb9l\" (UniqueName: \"kubernetes.io/projected/8885b48d-867b-4e74-820b-d0fd765b4006-kube-api-access-9hb9l\") pod \"8885b48d-867b-4e74-820b-d0fd765b4006\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") "
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.042620 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-nb\") pod \"8885b48d-867b-4e74-820b-d0fd765b4006\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") "
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.042687 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-dns-svc\") pod \"8885b48d-867b-4e74-820b-d0fd765b4006\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") "
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.042812 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-config\") pod \"8885b48d-867b-4e74-820b-d0fd765b4006\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") "
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.042852 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-sb\") pod \"8885b48d-867b-4e74-820b-d0fd765b4006\" (UID: \"8885b48d-867b-4e74-820b-d0fd765b4006\") "
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.059223 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8885b48d-867b-4e74-820b-d0fd765b4006-kube-api-access-9hb9l" (OuterVolumeSpecName: "kube-api-access-9hb9l") pod "8885b48d-867b-4e74-820b-d0fd765b4006" (UID: "8885b48d-867b-4e74-820b-d0fd765b4006"). InnerVolumeSpecName "kube-api-access-9hb9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.084055 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8885b48d-867b-4e74-820b-d0fd765b4006" (UID: "8885b48d-867b-4e74-820b-d0fd765b4006"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.085637 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-config" (OuterVolumeSpecName: "config") pod "8885b48d-867b-4e74-820b-d0fd765b4006" (UID: "8885b48d-867b-4e74-820b-d0fd765b4006"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.091596 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8885b48d-867b-4e74-820b-d0fd765b4006" (UID: "8885b48d-867b-4e74-820b-d0fd765b4006"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.093617 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8885b48d-867b-4e74-820b-d0fd765b4006" (UID: "8885b48d-867b-4e74-820b-d0fd765b4006"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.145937 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.145971 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.145999 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-config\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.146008 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8885b48d-867b-4e74-820b-d0fd765b4006-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.146019 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hb9l\" (UniqueName: \"kubernetes.io/projected/8885b48d-867b-4e74-820b-d0fd765b4006-kube-api-access-9hb9l\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.923724 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55cb6fc89-fbcvt"
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.980187 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55cb6fc89-fbcvt"]
Feb 19 09:45:31 crc kubenswrapper[4780]: I0219 09:45:31.986346 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55cb6fc89-fbcvt"]
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.352554 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.469041 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-fernet-keys\") pod \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") "
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.469091 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-scripts\") pod \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") "
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.469108 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-combined-ca-bundle\") pod \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") "
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.469187 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2kw9\" (UniqueName: \"kubernetes.io/projected/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-kube-api-access-z2kw9\") pod \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") "
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.469205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-config-data\") pod \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") "
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.469284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-credential-keys\") pod \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\" (UID: \"ba05d2b1-a1c5-473a-ac1c-9b60da468ade\") "
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.473700 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ba05d2b1-a1c5-473a-ac1c-9b60da468ade" (UID: "ba05d2b1-a1c5-473a-ac1c-9b60da468ade"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.474770 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ba05d2b1-a1c5-473a-ac1c-9b60da468ade" (UID: "ba05d2b1-a1c5-473a-ac1c-9b60da468ade"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.474875 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-kube-api-access-z2kw9" (OuterVolumeSpecName: "kube-api-access-z2kw9") pod "ba05d2b1-a1c5-473a-ac1c-9b60da468ade" (UID: "ba05d2b1-a1c5-473a-ac1c-9b60da468ade"). InnerVolumeSpecName "kube-api-access-z2kw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.475894 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-scripts" (OuterVolumeSpecName: "scripts") pod "ba05d2b1-a1c5-473a-ac1c-9b60da468ade" (UID: "ba05d2b1-a1c5-473a-ac1c-9b60da468ade"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.489656 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-config-data" (OuterVolumeSpecName: "config-data") pod "ba05d2b1-a1c5-473a-ac1c-9b60da468ade" (UID: "ba05d2b1-a1c5-473a-ac1c-9b60da468ade"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.501468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba05d2b1-a1c5-473a-ac1c-9b60da468ade" (UID: "ba05d2b1-a1c5-473a-ac1c-9b60da468ade"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.571963 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.572282 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.572366 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.572442 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2kw9\" (UniqueName: \"kubernetes.io/projected/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-kube-api-access-z2kw9\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.572513 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.572594 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba05d2b1-a1c5-473a-ac1c-9b60da468ade-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.941338 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-54kg5" event={"ID":"ba05d2b1-a1c5-473a-ac1c-9b60da468ade","Type":"ContainerDied","Data":"3d9cd8ee30ce579782e5e545c82c2b18f66ed6872476dbab9530deb771671acc"}
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.941418 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d9cd8ee30ce579782e5e545c82c2b18f66ed6872476dbab9530deb771671acc"
Feb 19 09:45:32 crc kubenswrapper[4780]: I0219 09:45:32.941416 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-54kg5"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.145376 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c45fc85b7-6d97x"]
Feb 19 09:45:33 crc kubenswrapper[4780]: E0219 09:45:33.145850 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" containerName="dnsmasq-dns"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.145881 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" containerName="dnsmasq-dns"
Feb 19 09:45:33 crc kubenswrapper[4780]: E0219 09:45:33.145919 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba05d2b1-a1c5-473a-ac1c-9b60da468ade" containerName="keystone-bootstrap"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.145934 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba05d2b1-a1c5-473a-ac1c-9b60da468ade" containerName="keystone-bootstrap"
Feb 19 09:45:33 crc kubenswrapper[4780]: E0219 09:45:33.145967 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" containerName="init"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.145979 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" containerName="init"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.146281 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba05d2b1-a1c5-473a-ac1c-9b60da468ade" containerName="keystone-bootstrap"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.146300 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" containerName="dnsmasq-dns"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.147169 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.149750 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.150681 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.150692 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztnqd"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.150823 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.165998 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c45fc85b7-6d97x"]
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.294120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-combined-ca-bundle\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.294214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-259hw\" (UniqueName: \"kubernetes.io/projected/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-kube-api-access-259hw\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.294377 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-credential-keys\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.294539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-config-data\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.294593 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-fernet-keys\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.294796 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-scripts\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.395898 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-credential-keys\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.396004 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-config-data\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.396039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-fernet-keys\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.396090 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-scripts\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.396154 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-combined-ca-bundle\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.396206 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-259hw\" (UniqueName: \"kubernetes.io/projected/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-kube-api-access-259hw\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.400949 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-scripts\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.401852 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-fernet-keys\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.402626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-config-data\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.402704 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-combined-ca-bundle\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.415633 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-credential-keys\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.424544 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-259hw\" (UniqueName: \"kubernetes.io/projected/5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3-kube-api-access-259hw\") pod \"keystone-6c45fc85b7-6d97x\" (UID: \"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3\") " pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.500665 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.784618 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c45fc85b7-6d97x"]
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.949066 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8885b48d-867b-4e74-820b-d0fd765b4006" path="/var/lib/kubelet/pods/8885b48d-867b-4e74-820b-d0fd765b4006/volumes"
Feb 19 09:45:33 crc kubenswrapper[4780]: I0219 09:45:33.953843 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c45fc85b7-6d97x" event={"ID":"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3","Type":"ContainerStarted","Data":"f009dd9414867fce19e031495f53dbdb86863facf40ad912d2d475bcddc5e7f9"}
Feb 19 09:45:34 crc kubenswrapper[4780]: I0219 09:45:34.968002 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c45fc85b7-6d97x" event={"ID":"5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3","Type":"ContainerStarted","Data":"eb007c6d03cf7f4a6d4735ef975d68fb8bed69a64e46aaabb63e524c76b4eca1"}
Feb 19 09:45:34 crc kubenswrapper[4780]: I0219 09:45:34.968368 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:45:34 crc kubenswrapper[4780]: I0219 09:45:34.995321 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6c45fc85b7-6d97x" podStartSLOduration=1.995296322 podStartE2EDuration="1.995296322s" podCreationTimestamp="2026-02-19 09:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:34.98949599 +0000 UTC m=+5077.733153459" watchObservedRunningTime="2026-02-19 09:45:34.995296322 +0000 UTC m=+5077.738953791"
Feb 19 09:45:37 crc kubenswrapper[4780]: I0219 09:45:37.951279 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:45:37 crc kubenswrapper[4780]: E0219 09:45:37.952364 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:45:48 crc kubenswrapper[4780]: I0219 09:45:48.938650 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:45:48 crc kubenswrapper[4780]: E0219 09:45:48.939904 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:45:59 crc kubenswrapper[4780]: I0219 09:45:59.939530 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6"
Feb 19 09:45:59 crc kubenswrapper[4780]: E0219 09:45:59.940307 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 09:46:04 crc kubenswrapper[4780]: I0219 09:46:04.909930 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6c45fc85b7-6d97x"
Feb 19 09:46:05 crc kubenswrapper[4780]: I0219 09:46:05.117661 4780 scope.go:117] "RemoveContainer" containerID="2b1efe93d2b2279b1ea2ec4e9cd075dd6b94cd6877f0422f936c61c7723049aa"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.527347 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.529550 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.532894 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.532894 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.536544 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-m8bhz"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.551569 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.623411 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzm6n\" (UniqueName: \"kubernetes.io/projected/b807c707-a369-4e3a-bfc1-0264f1bcf289-kube-api-access-tzm6n\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.623458 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.623489 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config-secret\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.725460 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzm6n\" (UniqueName: \"kubernetes.io/projected/b807c707-a369-4e3a-bfc1-0264f1bcf289-kube-api-access-tzm6n\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.725538 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.725600 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config-secret\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient"
Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.726783 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName:
\"kubernetes.io/configmap/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient" Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.738728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config-secret\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient" Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.747648 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzm6n\" (UniqueName: \"kubernetes.io/projected/b807c707-a369-4e3a-bfc1-0264f1bcf289-kube-api-access-tzm6n\") pod \"openstackclient\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") " pod="openstack/openstackclient" Feb 19 09:46:09 crc kubenswrapper[4780]: I0219 09:46:09.863776 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 09:46:10 crc kubenswrapper[4780]: I0219 09:46:10.387719 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 09:46:10 crc kubenswrapper[4780]: I0219 09:46:10.594734 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b807c707-a369-4e3a-bfc1-0264f1bcf289","Type":"ContainerStarted","Data":"fa561b7eceff1048752459654dc4ea5fa063dff45ae0ae4e42f3de647a85e78a"} Feb 19 09:46:10 crc kubenswrapper[4780]: I0219 09:46:10.594791 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b807c707-a369-4e3a-bfc1-0264f1bcf289","Type":"ContainerStarted","Data":"4a8b65f3ba57ad4baccb95442ae114e4290fb280bd83f237c0eaa689e3f2bf3d"} Feb 19 09:46:10 crc kubenswrapper[4780]: I0219 09:46:10.620976 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.620955283 podStartE2EDuration="1.620955283s" podCreationTimestamp="2026-02-19 09:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:10.611601919 +0000 UTC m=+5113.355259398" watchObservedRunningTime="2026-02-19 09:46:10.620955283 +0000 UTC m=+5113.364612742" Feb 19 09:46:14 crc kubenswrapper[4780]: I0219 09:46:14.939495 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:46:14 crc kubenswrapper[4780]: E0219 09:46:14.941081 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:46:28 crc kubenswrapper[4780]: I0219 09:46:28.938084 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:46:28 crc kubenswrapper[4780]: E0219 09:46:28.939442 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:46:41 crc kubenswrapper[4780]: I0219 09:46:41.941857 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:46:41 crc kubenswrapper[4780]: E0219 09:46:41.943164 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:46:53 crc kubenswrapper[4780]: I0219 09:46:53.938704 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:46:53 crc kubenswrapper[4780]: E0219 09:46:53.939340 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.423579 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk98b"] Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.427337 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.431394 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk98b"] Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.492788 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-catalog-content\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.492846 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2px\" (UniqueName: \"kubernetes.io/projected/9cb9f117-a572-47da-b451-ccd3ddd89147-kube-api-access-zt2px\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.492878 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-utilities\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.594095 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-catalog-content\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.594221 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt2px\" (UniqueName: \"kubernetes.io/projected/9cb9f117-a572-47da-b451-ccd3ddd89147-kube-api-access-zt2px\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.594251 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-utilities\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.594671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-utilities\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.594585 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-catalog-content\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.614602 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zt2px\" (UniqueName: \"kubernetes.io/projected/9cb9f117-a572-47da-b451-ccd3ddd89147-kube-api-access-zt2px\") pod \"community-operators-mk98b\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:55 crc kubenswrapper[4780]: I0219 09:46:55.760355 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:46:56 crc kubenswrapper[4780]: I0219 09:46:56.255514 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk98b"] Feb 19 09:46:56 crc kubenswrapper[4780]: W0219 09:46:56.265115 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb9f117_a572_47da_b451_ccd3ddd89147.slice/crio-896c0d8ad75c0dfcb1cb348351eec20a834c044acb6a4fd91298df7260f4b147 WatchSource:0}: Error finding container 896c0d8ad75c0dfcb1cb348351eec20a834c044acb6a4fd91298df7260f4b147: Status 404 returned error can't find the container with id 896c0d8ad75c0dfcb1cb348351eec20a834c044acb6a4fd91298df7260f4b147 Feb 19 09:46:57 crc kubenswrapper[4780]: I0219 09:46:57.044517 4780 generic.go:334] "Generic (PLEG): container finished" podID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerID="c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43" exitCode=0 Feb 19 09:46:57 crc kubenswrapper[4780]: I0219 09:46:57.044811 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerDied","Data":"c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43"} Feb 19 09:46:57 crc kubenswrapper[4780]: I0219 09:46:57.044838 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" 
event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerStarted","Data":"896c0d8ad75c0dfcb1cb348351eec20a834c044acb6a4fd91298df7260f4b147"} Feb 19 09:46:58 crc kubenswrapper[4780]: I0219 09:46:58.057478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerStarted","Data":"8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a"} Feb 19 09:46:59 crc kubenswrapper[4780]: I0219 09:46:59.066373 4780 generic.go:334] "Generic (PLEG): container finished" podID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerID="8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a" exitCode=0 Feb 19 09:46:59 crc kubenswrapper[4780]: I0219 09:46:59.066424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerDied","Data":"8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a"} Feb 19 09:47:00 crc kubenswrapper[4780]: I0219 09:47:00.077376 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerStarted","Data":"c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d"} Feb 19 09:47:00 crc kubenswrapper[4780]: I0219 09:47:00.099222 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk98b" podStartSLOduration=2.700462001 podStartE2EDuration="5.099200797s" podCreationTimestamp="2026-02-19 09:46:55 +0000 UTC" firstStartedPulling="2026-02-19 09:46:57.046571077 +0000 UTC m=+5159.790228536" lastFinishedPulling="2026-02-19 09:46:59.445309873 +0000 UTC m=+5162.188967332" observedRunningTime="2026-02-19 09:47:00.093785936 +0000 UTC m=+5162.837443385" watchObservedRunningTime="2026-02-19 09:47:00.099200797 +0000 UTC 
m=+5162.842858266" Feb 19 09:47:02 crc kubenswrapper[4780]: E0219 09:47:02.173459 4780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.103:41918->38.102.83.103:37621: write tcp 38.102.83.103:41918->38.102.83.103:37621: write: broken pipe Feb 19 09:47:05 crc kubenswrapper[4780]: I0219 09:47:05.761394 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:47:05 crc kubenswrapper[4780]: I0219 09:47:05.763262 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:47:05 crc kubenswrapper[4780]: I0219 09:47:05.816735 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:47:05 crc kubenswrapper[4780]: I0219 09:47:05.938172 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:47:05 crc kubenswrapper[4780]: E0219 09:47:05.938496 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:47:06 crc kubenswrapper[4780]: I0219 09:47:06.224279 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:47:06 crc kubenswrapper[4780]: I0219 09:47:06.296493 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk98b"] Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.164596 4780 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-mk98b" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="registry-server" containerID="cri-o://c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d" gracePeriod=2 Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.724583 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.891398 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-catalog-content\") pod \"9cb9f117-a572-47da-b451-ccd3ddd89147\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.891452 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-utilities\") pod \"9cb9f117-a572-47da-b451-ccd3ddd89147\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.891559 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2px\" (UniqueName: \"kubernetes.io/projected/9cb9f117-a572-47da-b451-ccd3ddd89147-kube-api-access-zt2px\") pod \"9cb9f117-a572-47da-b451-ccd3ddd89147\" (UID: \"9cb9f117-a572-47da-b451-ccd3ddd89147\") " Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.892927 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-utilities" (OuterVolumeSpecName: "utilities") pod "9cb9f117-a572-47da-b451-ccd3ddd89147" (UID: "9cb9f117-a572-47da-b451-ccd3ddd89147"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.899751 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb9f117-a572-47da-b451-ccd3ddd89147-kube-api-access-zt2px" (OuterVolumeSpecName: "kube-api-access-zt2px") pod "9cb9f117-a572-47da-b451-ccd3ddd89147" (UID: "9cb9f117-a572-47da-b451-ccd3ddd89147"). InnerVolumeSpecName "kube-api-access-zt2px". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.993451 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:08 crc kubenswrapper[4780]: I0219 09:47:08.993481 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt2px\" (UniqueName: \"kubernetes.io/projected/9cb9f117-a572-47da-b451-ccd3ddd89147-kube-api-access-zt2px\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.029708 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cb9f117-a572-47da-b451-ccd3ddd89147" (UID: "9cb9f117-a572-47da-b451-ccd3ddd89147"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.095389 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb9f117-a572-47da-b451-ccd3ddd89147-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.176903 4780 generic.go:334] "Generic (PLEG): container finished" podID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerID="c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d" exitCode=0 Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.176960 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerDied","Data":"c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d"} Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.177004 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk98b" event={"ID":"9cb9f117-a572-47da-b451-ccd3ddd89147","Type":"ContainerDied","Data":"896c0d8ad75c0dfcb1cb348351eec20a834c044acb6a4fd91298df7260f4b147"} Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.177022 4780 scope.go:117] "RemoveContainer" containerID="c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.177060 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk98b" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.223519 4780 scope.go:117] "RemoveContainer" containerID="8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.246989 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk98b"] Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.257346 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk98b"] Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.264812 4780 scope.go:117] "RemoveContainer" containerID="c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.346257 4780 scope.go:117] "RemoveContainer" containerID="c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d" Feb 19 09:47:09 crc kubenswrapper[4780]: E0219 09:47:09.346992 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d\": container with ID starting with c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d not found: ID does not exist" containerID="c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.347037 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d"} err="failed to get container status \"c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d\": rpc error: code = NotFound desc = could not find container \"c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d\": container with ID starting with c41052e8872e6aba9d04454c7e10a33df9bcd6d3e9f158fa59bbd6ee38f7588d not 
found: ID does not exist" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.347081 4780 scope.go:117] "RemoveContainer" containerID="8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a" Feb 19 09:47:09 crc kubenswrapper[4780]: E0219 09:47:09.347483 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a\": container with ID starting with 8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a not found: ID does not exist" containerID="8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.347516 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a"} err="failed to get container status \"8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a\": rpc error: code = NotFound desc = could not find container \"8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a\": container with ID starting with 8a06e2df01f49bba21021f792e76e9415059c6032e636f6f20b50ad7f524218a not found: ID does not exist" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.347539 4780 scope.go:117] "RemoveContainer" containerID="c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43" Feb 19 09:47:09 crc kubenswrapper[4780]: E0219 09:47:09.347984 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43\": container with ID starting with c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43 not found: ID does not exist" containerID="c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.348070 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43"} err="failed to get container status \"c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43\": rpc error: code = NotFound desc = could not find container \"c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43\": container with ID starting with c7ace897dd1ca6d5b627c382cc3c1938e9278ce63beda12374425f3c31df7e43 not found: ID does not exist" Feb 19 09:47:09 crc kubenswrapper[4780]: I0219 09:47:09.952240 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" path="/var/lib/kubelet/pods/9cb9f117-a572-47da-b451-ccd3ddd89147/volumes" Feb 19 09:47:17 crc kubenswrapper[4780]: I0219 09:47:17.950441 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:47:17 crc kubenswrapper[4780]: E0219 09:47:17.960202 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:47:21 crc kubenswrapper[4780]: E0219 09:47:21.153096 4780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.103:48298->38.102.83.103:37621: write tcp 38.102.83.103:48298->38.102.83.103:37621: write: connection reset by peer Feb 19 09:47:30 crc kubenswrapper[4780]: I0219 09:47:30.938514 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:47:30 crc kubenswrapper[4780]: E0219 09:47:30.939309 4780 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:47:43 crc kubenswrapper[4780]: I0219 09:47:43.937988 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:47:43 crc kubenswrapper[4780]: E0219 09:47:43.938905 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.668262 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sb9tk"] Feb 19 09:47:54 crc kubenswrapper[4780]: E0219 09:47:54.669188 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="registry-server" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.669204 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="registry-server" Feb 19 09:47:54 crc kubenswrapper[4780]: E0219 09:47:54.669227 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="extract-utilities" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.669237 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" 
containerName="extract-utilities" Feb 19 09:47:54 crc kubenswrapper[4780]: E0219 09:47:54.669260 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="extract-content" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.669269 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="extract-content" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.669498 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb9f117-a572-47da-b451-ccd3ddd89147" containerName="registry-server" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.670285 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.680780 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8813-account-create-update-8q5mj"] Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.682382 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.685599 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.693599 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8813-account-create-update-8q5mj"] Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.701670 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sb9tk"] Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.758091 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl5nm\" (UniqueName: \"kubernetes.io/projected/006cb814-2256-49c7-b617-c55e753dbc73-kube-api-access-nl5nm\") pod \"barbican-db-create-sb9tk\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.758170 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006cb814-2256-49c7-b617-c55e753dbc73-operator-scripts\") pod \"barbican-db-create-sb9tk\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.859288 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnczs\" (UniqueName: \"kubernetes.io/projected/3e0955c1-e301-443e-a782-0755ce6f6399-kube-api-access-rnczs\") pod \"barbican-8813-account-create-update-8q5mj\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.859517 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e0955c1-e301-443e-a782-0755ce6f6399-operator-scripts\") pod \"barbican-8813-account-create-update-8q5mj\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.859603 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl5nm\" (UniqueName: \"kubernetes.io/projected/006cb814-2256-49c7-b617-c55e753dbc73-kube-api-access-nl5nm\") pod \"barbican-db-create-sb9tk\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.859637 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006cb814-2256-49c7-b617-c55e753dbc73-operator-scripts\") pod \"barbican-db-create-sb9tk\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.860550 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006cb814-2256-49c7-b617-c55e753dbc73-operator-scripts\") pod \"barbican-db-create-sb9tk\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.895425 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl5nm\" (UniqueName: \"kubernetes.io/projected/006cb814-2256-49c7-b617-c55e753dbc73-kube-api-access-nl5nm\") pod \"barbican-db-create-sb9tk\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.939552 4780 scope.go:117] "RemoveContainer" 
containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:47:54 crc kubenswrapper[4780]: E0219 09:47:54.939928 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.960803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnczs\" (UniqueName: \"kubernetes.io/projected/3e0955c1-e301-443e-a782-0755ce6f6399-kube-api-access-rnczs\") pod \"barbican-8813-account-create-update-8q5mj\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.961337 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e0955c1-e301-443e-a782-0755ce6f6399-operator-scripts\") pod \"barbican-8813-account-create-update-8q5mj\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.962753 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e0955c1-e301-443e-a782-0755ce6f6399-operator-scripts\") pod \"barbican-8813-account-create-update-8q5mj\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.983657 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnczs\" 
(UniqueName: \"kubernetes.io/projected/3e0955c1-e301-443e-a782-0755ce6f6399-kube-api-access-rnczs\") pod \"barbican-8813-account-create-update-8q5mj\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:54 crc kubenswrapper[4780]: I0219 09:47:54.996871 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:55 crc kubenswrapper[4780]: I0219 09:47:55.010323 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:55 crc kubenswrapper[4780]: I0219 09:47:55.497341 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sb9tk"] Feb 19 09:47:55 crc kubenswrapper[4780]: I0219 09:47:55.506158 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8813-account-create-update-8q5mj"] Feb 19 09:47:55 crc kubenswrapper[4780]: I0219 09:47:55.609387 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sb9tk" event={"ID":"006cb814-2256-49c7-b617-c55e753dbc73","Type":"ContainerStarted","Data":"ea2b8d9ac0ce62c84a12c2f7ecc93935b5f6e9ff08a628cde5c48f4b7ee199a8"} Feb 19 09:47:55 crc kubenswrapper[4780]: I0219 09:47:55.610805 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8813-account-create-update-8q5mj" event={"ID":"3e0955c1-e301-443e-a782-0755ce6f6399","Type":"ContainerStarted","Data":"b003fc8b943607dac5600bbfd84a2b701dae5803d4951c048b3dcc0133837aa8"} Feb 19 09:47:56 crc kubenswrapper[4780]: I0219 09:47:56.623897 4780 generic.go:334] "Generic (PLEG): container finished" podID="3e0955c1-e301-443e-a782-0755ce6f6399" containerID="b6b8405c2b5b8f18202e418c033ca717976b8af494417653cb566d3708b75ad2" exitCode=0 Feb 19 09:47:56 crc kubenswrapper[4780]: I0219 09:47:56.624419 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-8813-account-create-update-8q5mj" event={"ID":"3e0955c1-e301-443e-a782-0755ce6f6399","Type":"ContainerDied","Data":"b6b8405c2b5b8f18202e418c033ca717976b8af494417653cb566d3708b75ad2"} Feb 19 09:47:56 crc kubenswrapper[4780]: I0219 09:47:56.627910 4780 generic.go:334] "Generic (PLEG): container finished" podID="006cb814-2256-49c7-b617-c55e753dbc73" containerID="ed38482fe7bffa533fb880806206dbfd5522cf4c7c045570d5bc1866893d86d3" exitCode=0 Feb 19 09:47:56 crc kubenswrapper[4780]: I0219 09:47:56.627953 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sb9tk" event={"ID":"006cb814-2256-49c7-b617-c55e753dbc73","Type":"ContainerDied","Data":"ed38482fe7bffa533fb880806206dbfd5522cf4c7c045570d5bc1866893d86d3"} Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.068148 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.075706 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.132853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl5nm\" (UniqueName: \"kubernetes.io/projected/006cb814-2256-49c7-b617-c55e753dbc73-kube-api-access-nl5nm\") pod \"006cb814-2256-49c7-b617-c55e753dbc73\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.132904 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnczs\" (UniqueName: \"kubernetes.io/projected/3e0955c1-e301-443e-a782-0755ce6f6399-kube-api-access-rnczs\") pod \"3e0955c1-e301-443e-a782-0755ce6f6399\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.132959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e0955c1-e301-443e-a782-0755ce6f6399-operator-scripts\") pod \"3e0955c1-e301-443e-a782-0755ce6f6399\" (UID: \"3e0955c1-e301-443e-a782-0755ce6f6399\") " Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.133030 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006cb814-2256-49c7-b617-c55e753dbc73-operator-scripts\") pod \"006cb814-2256-49c7-b617-c55e753dbc73\" (UID: \"006cb814-2256-49c7-b617-c55e753dbc73\") " Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.133808 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006cb814-2256-49c7-b617-c55e753dbc73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "006cb814-2256-49c7-b617-c55e753dbc73" (UID: "006cb814-2256-49c7-b617-c55e753dbc73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.133810 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0955c1-e301-443e-a782-0755ce6f6399-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e0955c1-e301-443e-a782-0755ce6f6399" (UID: "3e0955c1-e301-443e-a782-0755ce6f6399"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.138440 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0955c1-e301-443e-a782-0755ce6f6399-kube-api-access-rnczs" (OuterVolumeSpecName: "kube-api-access-rnczs") pod "3e0955c1-e301-443e-a782-0755ce6f6399" (UID: "3e0955c1-e301-443e-a782-0755ce6f6399"). InnerVolumeSpecName "kube-api-access-rnczs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.139372 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006cb814-2256-49c7-b617-c55e753dbc73-kube-api-access-nl5nm" (OuterVolumeSpecName: "kube-api-access-nl5nm") pod "006cb814-2256-49c7-b617-c55e753dbc73" (UID: "006cb814-2256-49c7-b617-c55e753dbc73"). InnerVolumeSpecName "kube-api-access-nl5nm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.235063 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl5nm\" (UniqueName: \"kubernetes.io/projected/006cb814-2256-49c7-b617-c55e753dbc73-kube-api-access-nl5nm\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.235167 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnczs\" (UniqueName: \"kubernetes.io/projected/3e0955c1-e301-443e-a782-0755ce6f6399-kube-api-access-rnczs\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.235189 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e0955c1-e301-443e-a782-0755ce6f6399-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.235208 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006cb814-2256-49c7-b617-c55e753dbc73-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.651208 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sb9tk" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.651232 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sb9tk" event={"ID":"006cb814-2256-49c7-b617-c55e753dbc73","Type":"ContainerDied","Data":"ea2b8d9ac0ce62c84a12c2f7ecc93935b5f6e9ff08a628cde5c48f4b7ee199a8"} Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.651793 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2b8d9ac0ce62c84a12c2f7ecc93935b5f6e9ff08a628cde5c48f4b7ee199a8" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.653277 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8813-account-create-update-8q5mj" event={"ID":"3e0955c1-e301-443e-a782-0755ce6f6399","Type":"ContainerDied","Data":"b003fc8b943607dac5600bbfd84a2b701dae5803d4951c048b3dcc0133837aa8"} Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.653321 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b003fc8b943607dac5600bbfd84a2b701dae5803d4951c048b3dcc0133837aa8" Feb 19 09:47:58 crc kubenswrapper[4780]: I0219 09:47:58.653326 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8813-account-create-update-8q5mj" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.930406 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-tsv5n"] Feb 19 09:47:59 crc kubenswrapper[4780]: E0219 09:47:59.930721 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006cb814-2256-49c7-b617-c55e753dbc73" containerName="mariadb-database-create" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.930734 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="006cb814-2256-49c7-b617-c55e753dbc73" containerName="mariadb-database-create" Feb 19 09:47:59 crc kubenswrapper[4780]: E0219 09:47:59.930747 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0955c1-e301-443e-a782-0755ce6f6399" containerName="mariadb-account-create-update" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.930753 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0955c1-e301-443e-a782-0755ce6f6399" containerName="mariadb-account-create-update" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.930949 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0955c1-e301-443e-a782-0755ce6f6399" containerName="mariadb-account-create-update" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.930968 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="006cb814-2256-49c7-b617-c55e753dbc73" containerName="mariadb-database-create" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.931494 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.933781 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.933796 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ccj5x" Feb 19 09:47:59 crc kubenswrapper[4780]: I0219 09:47:59.953658 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tsv5n"] Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.066680 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-db-sync-config-data\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.066738 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvkl\" (UniqueName: \"kubernetes.io/projected/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-kube-api-access-xqvkl\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.066767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-combined-ca-bundle\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.168839 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-db-sync-config-data\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.168927 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvkl\" (UniqueName: \"kubernetes.io/projected/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-kube-api-access-xqvkl\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.168987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-combined-ca-bundle\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.175266 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-combined-ca-bundle\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.175787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-db-sync-config-data\") pod \"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.204711 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvkl\" (UniqueName: \"kubernetes.io/projected/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-kube-api-access-xqvkl\") pod 
\"barbican-db-sync-tsv5n\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.252464 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.519845 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tsv5n"] Feb 19 09:48:00 crc kubenswrapper[4780]: I0219 09:48:00.672056 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsv5n" event={"ID":"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58","Type":"ContainerStarted","Data":"24fa2502de34d49d746c2872ab2c358297292908a4c74ac1492ba95431074be5"} Feb 19 09:48:01 crc kubenswrapper[4780]: I0219 09:48:01.686153 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsv5n" event={"ID":"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58","Type":"ContainerStarted","Data":"9ce7b324a24a61cec3fefd79864f2f6ed08a1250ff0579ef5c195f0e218f2d4b"} Feb 19 09:48:01 crc kubenswrapper[4780]: I0219 09:48:01.710529 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-tsv5n" podStartSLOduration=2.710513069 podStartE2EDuration="2.710513069s" podCreationTimestamp="2026-02-19 09:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:01.706750111 +0000 UTC m=+5224.450407560" watchObservedRunningTime="2026-02-19 09:48:01.710513069 +0000 UTC m=+5224.454170518" Feb 19 09:48:03 crc kubenswrapper[4780]: I0219 09:48:03.713606 4780 generic.go:334] "Generic (PLEG): container finished" podID="f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" containerID="9ce7b324a24a61cec3fefd79864f2f6ed08a1250ff0579ef5c195f0e218f2d4b" exitCode=0 Feb 19 09:48:03 crc kubenswrapper[4780]: I0219 09:48:03.713717 4780 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-db-sync-tsv5n" event={"ID":"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58","Type":"ContainerDied","Data":"9ce7b324a24a61cec3fefd79864f2f6ed08a1250ff0579ef5c195f0e218f2d4b"} Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.097816 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.181268 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-db-sync-config-data\") pod \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.181403 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvkl\" (UniqueName: \"kubernetes.io/projected/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-kube-api-access-xqvkl\") pod \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.181486 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-combined-ca-bundle\") pod \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\" (UID: \"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58\") " Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.186310 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" (UID: "f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.186510 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-kube-api-access-xqvkl" (OuterVolumeSpecName: "kube-api-access-xqvkl") pod "f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" (UID: "f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58"). InnerVolumeSpecName "kube-api-access-xqvkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.207621 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" (UID: "f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.283392 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.283457 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvkl\" (UniqueName: \"kubernetes.io/projected/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-kube-api-access-xqvkl\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.283474 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.735340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsv5n" 
event={"ID":"f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58","Type":"ContainerDied","Data":"24fa2502de34d49d746c2872ab2c358297292908a4c74ac1492ba95431074be5"} Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.735408 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fa2502de34d49d746c2872ab2c358297292908a4c74ac1492ba95431074be5" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.735451 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tsv5n" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.973385 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6f79c4dc7-dmlmt"] Feb 19 09:48:05 crc kubenswrapper[4780]: E0219 09:48:05.975515 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" containerName="barbican-db-sync" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.975540 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" containerName="barbican-db-sync" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.975863 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" containerName="barbican-db-sync" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.977110 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.986486 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.986697 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.986828 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ccj5x" Feb 19 09:48:05 crc kubenswrapper[4780]: I0219 09:48:05.990309 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f79c4dc7-dmlmt"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.038760 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5746675c94-xbv7n"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.041715 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.044790 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.059222 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5746675c94-xbv7n"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.097549 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-config-data\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.097615 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-config-data-custom\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.097639 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598ae77e-d2eb-4858-9443-dc5bc697e68a-logs\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.097693 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrdj\" (UniqueName: \"kubernetes.io/projected/598ae77e-d2eb-4858-9443-dc5bc697e68a-kube-api-access-xjrdj\") pod \"barbican-worker-6f79c4dc7-dmlmt\" 
(UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.097713 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-combined-ca-bundle\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.103400 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66845c4585-sssf4"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.109032 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.124317 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66845c4585-sssf4"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.200937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-config-data-custom\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.200980 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-config-data\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201024 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-config-data\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9969382-7625-4a4e-b6df-765bc78bec0c-logs\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201075 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-combined-ca-bundle\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201094 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-config-data-custom\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201112 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj5sn\" (UniqueName: \"kubernetes.io/projected/e9969382-7625-4a4e-b6df-765bc78bec0c-kube-api-access-fj5sn\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " 
pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598ae77e-d2eb-4858-9443-dc5bc697e68a-logs\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjrdj\" (UniqueName: \"kubernetes.io/projected/598ae77e-d2eb-4858-9443-dc5bc697e68a-kube-api-access-xjrdj\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.201273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-combined-ca-bundle\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.202487 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598ae77e-d2eb-4858-9443-dc5bc697e68a-logs\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.206102 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-combined-ca-bundle\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " 
pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.206815 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-config-data-custom\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.218357 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598ae77e-d2eb-4858-9443-dc5bc697e68a-config-data\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.230671 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c4d644ff4-gq2np"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.232441 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.237194 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.237579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjrdj\" (UniqueName: \"kubernetes.io/projected/598ae77e-d2eb-4858-9443-dc5bc697e68a-kube-api-access-xjrdj\") pod \"barbican-worker-6f79c4dc7-dmlmt\" (UID: \"598ae77e-d2eb-4858-9443-dc5bc697e68a\") " pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.247387 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c4d644ff4-gq2np"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302709 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9969382-7625-4a4e-b6df-765bc78bec0c-logs\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302764 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-config\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302791 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-combined-ca-bundle\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " 
pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302820 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj5sn\" (UniqueName: \"kubernetes.io/projected/e9969382-7625-4a4e-b6df-765bc78bec0c-kube-api-access-fj5sn\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302845 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmlr\" (UniqueName: \"kubernetes.io/projected/1ce1b0c4-4bb5-408a-9547-920862f4070a-kube-api-access-xxmlr\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302876 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-dns-svc\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302892 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-sb\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302958 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-nb\") pod 
\"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.302980 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-config-data-custom\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.303000 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-config-data\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.303887 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9969382-7625-4a4e-b6df-765bc78bec0c-logs\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.308707 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-config-data-custom\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.309070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-combined-ca-bundle\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.315536 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9969382-7625-4a4e-b6df-765bc78bec0c-config-data\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.319633 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj5sn\" (UniqueName: \"kubernetes.io/projected/e9969382-7625-4a4e-b6df-765bc78bec0c-kube-api-access-fj5sn\") pod \"barbican-keystone-listener-5746675c94-xbv7n\" (UID: \"e9969382-7625-4a4e-b6df-765bc78bec0c\") " pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.333742 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f79c4dc7-dmlmt" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.369748 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.413766 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-combined-ca-bundle\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.413862 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-config\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.413908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmlr\" (UniqueName: \"kubernetes.io/projected/1ce1b0c4-4bb5-408a-9547-920862f4070a-kube-api-access-xxmlr\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.413959 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-config-data-custom\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.413985 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-dns-svc\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: 
\"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.414004 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d73ea82-95d2-49e5-b2a9-974c7e440807-logs\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.414024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-sb\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.414041 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzsh\" (UniqueName: \"kubernetes.io/projected/8d73ea82-95d2-49e5-b2a9-974c7e440807-kube-api-access-pwzsh\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.414211 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-config-data\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.414262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-nb\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: 
\"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.416042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-dns-svc\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.418825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-nb\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.419536 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-sb\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.419607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-config\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.436201 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmlr\" (UniqueName: \"kubernetes.io/projected/1ce1b0c4-4bb5-408a-9547-920862f4070a-kube-api-access-xxmlr\") pod \"dnsmasq-dns-66845c4585-sssf4\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc 
kubenswrapper[4780]: I0219 09:48:06.516778 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-config-data\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.516870 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-combined-ca-bundle\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.516916 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-config-data-custom\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.516938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d73ea82-95d2-49e5-b2a9-974c7e440807-logs\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.516959 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzsh\" (UniqueName: \"kubernetes.io/projected/8d73ea82-95d2-49e5-b2a9-974c7e440807-kube-api-access-pwzsh\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.517944 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d73ea82-95d2-49e5-b2a9-974c7e440807-logs\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.523908 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-config-data-custom\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.524417 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-combined-ca-bundle\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.524433 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d73ea82-95d2-49e5-b2a9-974c7e440807-config-data\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.536018 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzsh\" (UniqueName: \"kubernetes.io/projected/8d73ea82-95d2-49e5-b2a9-974c7e440807-kube-api-access-pwzsh\") pod \"barbican-api-7c4d644ff4-gq2np\" (UID: \"8d73ea82-95d2-49e5-b2a9-974c7e440807\") " pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.609593 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.731623 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.848057 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f79c4dc7-dmlmt"] Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.938619 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:48:06 crc kubenswrapper[4780]: E0219 09:48:06.939210 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:48:06 crc kubenswrapper[4780]: I0219 09:48:06.986458 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5746675c94-xbv7n"] Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.124928 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66845c4585-sssf4"] Feb 19 09:48:07 crc kubenswrapper[4780]: W0219 09:48:07.141818 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ce1b0c4_4bb5_408a_9547_920862f4070a.slice/crio-7979e16c47ff77c79013d8150055ba1f44fd8bed3b7036a544ea44b9041f97a0 WatchSource:0}: Error finding container 7979e16c47ff77c79013d8150055ba1f44fd8bed3b7036a544ea44b9041f97a0: Status 404 returned error can't find the container with id 7979e16c47ff77c79013d8150055ba1f44fd8bed3b7036a544ea44b9041f97a0 Feb 19 09:48:07 crc 
kubenswrapper[4780]: I0219 09:48:07.143334 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c4d644ff4-gq2np"] Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.755117 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" event={"ID":"e9969382-7625-4a4e-b6df-765bc78bec0c","Type":"ContainerStarted","Data":"bde6c0bbedcd669a1bf62fd52a2b833705da474970d9a7516ae591569b1abbec"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.755617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" event={"ID":"e9969382-7625-4a4e-b6df-765bc78bec0c","Type":"ContainerStarted","Data":"4b48a0f635017559ac811b0d9a33cab61c04dc8377ad4be9a10581ea6b1bea34"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.755639 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" event={"ID":"e9969382-7625-4a4e-b6df-765bc78bec0c","Type":"ContainerStarted","Data":"5ffa8eff8adf817bd4da69a798b636cd786f5e235cced2ea5bffad2f562c39ad"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.758044 4780 generic.go:334] "Generic (PLEG): container finished" podID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerID="315eb0bee7d04de61421b7c213e41af71b142cb4b8b1373ae9277d205a17c180" exitCode=0 Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.758116 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66845c4585-sssf4" event={"ID":"1ce1b0c4-4bb5-408a-9547-920862f4070a","Type":"ContainerDied","Data":"315eb0bee7d04de61421b7c213e41af71b142cb4b8b1373ae9277d205a17c180"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.758165 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66845c4585-sssf4" 
event={"ID":"1ce1b0c4-4bb5-408a-9547-920862f4070a","Type":"ContainerStarted","Data":"7979e16c47ff77c79013d8150055ba1f44fd8bed3b7036a544ea44b9041f97a0"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.761273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c4d644ff4-gq2np" event={"ID":"8d73ea82-95d2-49e5-b2a9-974c7e440807","Type":"ContainerStarted","Data":"f15f0af1a698ebb8828c0936f881d5b1f81bf4c95cc3b298a0a9a5afc4db6208"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.761362 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c4d644ff4-gq2np" event={"ID":"8d73ea82-95d2-49e5-b2a9-974c7e440807","Type":"ContainerStarted","Data":"4768e97450b4fe84ea3d22268a2aaefd4f3fa33b01d7540acaa953aba3ad9c8f"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.761400 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c4d644ff4-gq2np" event={"ID":"8d73ea82-95d2-49e5-b2a9-974c7e440807","Type":"ContainerStarted","Data":"a2e0f1a2a3dffefb8f44d007c30366b8bf9cd0114ddb84ed3c7b486fde62c83e"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.761685 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.761768 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.763781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f79c4dc7-dmlmt" event={"ID":"598ae77e-d2eb-4858-9443-dc5bc697e68a","Type":"ContainerStarted","Data":"2c94bfe9cf97bef0f4e2f473e5a0cf23ac64db506907bbd11c5e7bc2a3efd85a"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.763869 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f79c4dc7-dmlmt" 
event={"ID":"598ae77e-d2eb-4858-9443-dc5bc697e68a","Type":"ContainerStarted","Data":"f10b8904b7c0e99abae91fb07958000c02e7c37c1be0eb1dfc449c978299fa90"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.763946 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f79c4dc7-dmlmt" event={"ID":"598ae77e-d2eb-4858-9443-dc5bc697e68a","Type":"ContainerStarted","Data":"2dcee6da0b7ef2bb6ebae5a83787b8b3314748f781fa083650b692e034a4368e"} Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.781032 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5746675c94-xbv7n" podStartSLOduration=1.781010206 podStartE2EDuration="1.781010206s" podCreationTimestamp="2026-02-19 09:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:07.776198331 +0000 UTC m=+5230.519855790" watchObservedRunningTime="2026-02-19 09:48:07.781010206 +0000 UTC m=+5230.524667665" Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.831416 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c4d644ff4-gq2np" podStartSLOduration=1.831397366 podStartE2EDuration="1.831397366s" podCreationTimestamp="2026-02-19 09:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:07.808448989 +0000 UTC m=+5230.552106428" watchObservedRunningTime="2026-02-19 09:48:07.831397366 +0000 UTC m=+5230.575054825" Feb 19 09:48:07 crc kubenswrapper[4780]: I0219 09:48:07.856869 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6f79c4dc7-dmlmt" podStartSLOduration=2.856844738 podStartE2EDuration="2.856844738s" podCreationTimestamp="2026-02-19 09:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:07.852461644 +0000 UTC m=+5230.596119103" watchObservedRunningTime="2026-02-19 09:48:07.856844738 +0000 UTC m=+5230.600502187" Feb 19 09:48:08 crc kubenswrapper[4780]: I0219 09:48:08.781547 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66845c4585-sssf4" event={"ID":"1ce1b0c4-4bb5-408a-9547-920862f4070a","Type":"ContainerStarted","Data":"72fb7343aea1d6a538a58b42aaa05aaf1da4e311e03a2d05994491bad68563cc"} Feb 19 09:48:08 crc kubenswrapper[4780]: I0219 09:48:08.822638 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66845c4585-sssf4" podStartSLOduration=2.822600072 podStartE2EDuration="2.822600072s" podCreationTimestamp="2026-02-19 09:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:08.804667595 +0000 UTC m=+5231.548325064" watchObservedRunningTime="2026-02-19 09:48:08.822600072 +0000 UTC m=+5231.566257561" Feb 19 09:48:09 crc kubenswrapper[4780]: I0219 09:48:09.798139 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:10 crc kubenswrapper[4780]: I0219 09:48:10.078296 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8lwtc"] Feb 19 09:48:10 crc kubenswrapper[4780]: I0219 09:48:10.084347 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8lwtc"] Feb 19 09:48:11 crc kubenswrapper[4780]: I0219 09:48:11.976996 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4db9bd-1d71-4d42-a786-e5e7c0098d4c" path="/var/lib/kubelet/pods/da4db9bd-1d71-4d42-a786-e5e7c0098d4c/volumes" Feb 19 09:48:16 crc kubenswrapper[4780]: I0219 09:48:16.733302 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:16 crc kubenswrapper[4780]: I0219 09:48:16.808534 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5757586b9-dfw5j"] Feb 19 09:48:16 crc kubenswrapper[4780]: I0219 09:48:16.809152 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerName="dnsmasq-dns" containerID="cri-o://a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db" gracePeriod=10 Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.342107 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.425766 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-config\") pod \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.426070 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-dns-svc\") pod \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.426943 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-sb\") pod \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.427110 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz2kd\" (UniqueName: 
\"kubernetes.io/projected/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-kube-api-access-kz2kd\") pod \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.427372 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-nb\") pod \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\" (UID: \"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c\") " Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.441502 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-kube-api-access-kz2kd" (OuterVolumeSpecName: "kube-api-access-kz2kd") pod "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" (UID: "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c"). InnerVolumeSpecName "kube-api-access-kz2kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.480501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" (UID: "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.487787 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" (UID: "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.494260 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" (UID: "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.507190 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-config" (OuterVolumeSpecName: "config") pod "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" (UID: "cc05ab1e-973b-4fdc-a0d2-8da967c5f88c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.529274 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.529305 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz2kd\" (UniqueName: \"kubernetes.io/projected/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-kube-api-access-kz2kd\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.529318 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.529329 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc 
kubenswrapper[4780]: I0219 09:48:17.529339 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.893221 4780 generic.go:334] "Generic (PLEG): container finished" podID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerID="a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db" exitCode=0 Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.893283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" event={"ID":"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c","Type":"ContainerDied","Data":"a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db"} Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.893331 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" event={"ID":"cc05ab1e-973b-4fdc-a0d2-8da967c5f88c","Type":"ContainerDied","Data":"c747f84710744d97fb8d2b03e6daadd4c2005f085b0b5664be0d0417fbd8eed1"} Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.893355 4780 scope.go:117] "RemoveContainer" containerID="a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.894589 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5757586b9-dfw5j" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.921285 4780 scope.go:117] "RemoveContainer" containerID="cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.960911 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5757586b9-dfw5j"] Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.966701 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5757586b9-dfw5j"] Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.983949 4780 scope.go:117] "RemoveContainer" containerID="a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db" Feb 19 09:48:17 crc kubenswrapper[4780]: E0219 09:48:17.984486 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db\": container with ID starting with a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db not found: ID does not exist" containerID="a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.984590 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db"} err="failed to get container status \"a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db\": rpc error: code = NotFound desc = could not find container \"a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db\": container with ID starting with a0f0eba02cb8e9d098c2b4956812fce649e5ffb2423d71d6a32122e3799c86db not found: ID does not exist" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.984666 4780 scope.go:117] "RemoveContainer" containerID="cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679" Feb 19 
09:48:17 crc kubenswrapper[4780]: E0219 09:48:17.984970 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679\": container with ID starting with cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679 not found: ID does not exist" containerID="cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679" Feb 19 09:48:17 crc kubenswrapper[4780]: I0219 09:48:17.985000 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679"} err="failed to get container status \"cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679\": rpc error: code = NotFound desc = could not find container \"cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679\": container with ID starting with cc370e291455e9b4686c2e61c3c302db7ec600b48eb14773f9f499d7c5190679 not found: ID does not exist" Feb 19 09:48:18 crc kubenswrapper[4780]: I0219 09:48:18.090987 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:18 crc kubenswrapper[4780]: I0219 09:48:18.189729 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c4d644ff4-gq2np" Feb 19 09:48:19 crc kubenswrapper[4780]: I0219 09:48:19.943870 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:48:19 crc kubenswrapper[4780]: E0219 09:48:19.944406 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:48:19 crc kubenswrapper[4780]: I0219 09:48:19.957239 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" path="/var/lib/kubelet/pods/cc05ab1e-973b-4fdc-a0d2-8da967c5f88c/volumes" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.306857 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9fmpp"] Feb 19 09:48:30 crc kubenswrapper[4780]: E0219 09:48:30.308650 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerName="init" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.308725 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerName="init" Feb 19 09:48:30 crc kubenswrapper[4780]: E0219 09:48:30.308801 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerName="dnsmasq-dns" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.308853 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerName="dnsmasq-dns" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.309069 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc05ab1e-973b-4fdc-a0d2-8da967c5f88c" containerName="dnsmasq-dns" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.309725 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.317090 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9fmpp"] Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.414903 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1caa-account-create-update-wwwpz"] Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.416073 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.417862 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.423669 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1caa-account-create-update-wwwpz"] Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.481695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-operator-scripts\") pod \"neutron-db-create-9fmpp\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.481769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z28k\" (UniqueName: \"kubernetes.io/projected/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-kube-api-access-7z28k\") pod \"neutron-db-create-9fmpp\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.584865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2855a35c-55c0-4a23-bac0-98c18c0ce711-operator-scripts\") pod \"neutron-1caa-account-create-update-wwwpz\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.585539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-operator-scripts\") pod \"neutron-db-create-9fmpp\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.585780 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzvm\" (UniqueName: \"kubernetes.io/projected/2855a35c-55c0-4a23-bac0-98c18c0ce711-kube-api-access-6mzvm\") pod \"neutron-1caa-account-create-update-wwwpz\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.585952 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z28k\" (UniqueName: \"kubernetes.io/projected/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-kube-api-access-7z28k\") pod \"neutron-db-create-9fmpp\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.586654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-operator-scripts\") pod \"neutron-db-create-9fmpp\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.611228 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z28k\" 
(UniqueName: \"kubernetes.io/projected/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-kube-api-access-7z28k\") pod \"neutron-db-create-9fmpp\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.637653 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.689011 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzvm\" (UniqueName: \"kubernetes.io/projected/2855a35c-55c0-4a23-bac0-98c18c0ce711-kube-api-access-6mzvm\") pod \"neutron-1caa-account-create-update-wwwpz\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.689899 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2855a35c-55c0-4a23-bac0-98c18c0ce711-operator-scripts\") pod \"neutron-1caa-account-create-update-wwwpz\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.690935 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2855a35c-55c0-4a23-bac0-98c18c0ce711-operator-scripts\") pod \"neutron-1caa-account-create-update-wwwpz\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.706442 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mzvm\" (UniqueName: \"kubernetes.io/projected/2855a35c-55c0-4a23-bac0-98c18c0ce711-kube-api-access-6mzvm\") pod \"neutron-1caa-account-create-update-wwwpz\" (UID: 
\"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:30 crc kubenswrapper[4780]: I0219 09:48:30.732204 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:31 crc kubenswrapper[4780]: I0219 09:48:31.090883 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9fmpp"] Feb 19 09:48:31 crc kubenswrapper[4780]: W0219 09:48:31.219967 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2855a35c_55c0_4a23_bac0_98c18c0ce711.slice/crio-d48895849e9ef21417c7773e5f544b65c9e7ce8662077942bedc2770c4f64b7f WatchSource:0}: Error finding container d48895849e9ef21417c7773e5f544b65c9e7ce8662077942bedc2770c4f64b7f: Status 404 returned error can't find the container with id d48895849e9ef21417c7773e5f544b65c9e7ce8662077942bedc2770c4f64b7f Feb 19 09:48:31 crc kubenswrapper[4780]: I0219 09:48:31.222678 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1caa-account-create-update-wwwpz"] Feb 19 09:48:31 crc kubenswrapper[4780]: I0219 09:48:31.938678 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:48:31 crc kubenswrapper[4780]: E0219 09:48:31.939450 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:48:32 crc kubenswrapper[4780]: I0219 09:48:32.036738 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="2855a35c-55c0-4a23-bac0-98c18c0ce711" containerID="799f74d95d5991913652ea81021ab96507f07c10baac54021134fce9c580d343" exitCode=0 Feb 19 09:48:32 crc kubenswrapper[4780]: I0219 09:48:32.036837 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1caa-account-create-update-wwwpz" event={"ID":"2855a35c-55c0-4a23-bac0-98c18c0ce711","Type":"ContainerDied","Data":"799f74d95d5991913652ea81021ab96507f07c10baac54021134fce9c580d343"} Feb 19 09:48:32 crc kubenswrapper[4780]: I0219 09:48:32.036897 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1caa-account-create-update-wwwpz" event={"ID":"2855a35c-55c0-4a23-bac0-98c18c0ce711","Type":"ContainerStarted","Data":"d48895849e9ef21417c7773e5f544b65c9e7ce8662077942bedc2770c4f64b7f"} Feb 19 09:48:32 crc kubenswrapper[4780]: I0219 09:48:32.042428 4780 generic.go:334] "Generic (PLEG): container finished" podID="95e1e1cc-2317-4d00-a0e4-c9b9ce697969" containerID="18841dfce0c18a3f9f096a4ce4b6f06cb53b8cab457ba1b4376e84e2bcc66f6f" exitCode=0 Feb 19 09:48:32 crc kubenswrapper[4780]: I0219 09:48:32.042490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9fmpp" event={"ID":"95e1e1cc-2317-4d00-a0e4-c9b9ce697969","Type":"ContainerDied","Data":"18841dfce0c18a3f9f096a4ce4b6f06cb53b8cab457ba1b4376e84e2bcc66f6f"} Feb 19 09:48:32 crc kubenswrapper[4780]: I0219 09:48:32.042579 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9fmpp" event={"ID":"95e1e1cc-2317-4d00-a0e4-c9b9ce697969","Type":"ContainerStarted","Data":"ea9ed82c1af1dce6b41517e8b491e975bd59fa5a2141a7b26f4ecda307377226"} Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.590261 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.597048 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.745970 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z28k\" (UniqueName: \"kubernetes.io/projected/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-kube-api-access-7z28k\") pod \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.746042 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-operator-scripts\") pod \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\" (UID: \"95e1e1cc-2317-4d00-a0e4-c9b9ce697969\") " Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.746077 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzvm\" (UniqueName: \"kubernetes.io/projected/2855a35c-55c0-4a23-bac0-98c18c0ce711-kube-api-access-6mzvm\") pod \"2855a35c-55c0-4a23-bac0-98c18c0ce711\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.746342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2855a35c-55c0-4a23-bac0-98c18c0ce711-operator-scripts\") pod \"2855a35c-55c0-4a23-bac0-98c18c0ce711\" (UID: \"2855a35c-55c0-4a23-bac0-98c18c0ce711\") " Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.746992 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95e1e1cc-2317-4d00-a0e4-c9b9ce697969" (UID: "95e1e1cc-2317-4d00-a0e4-c9b9ce697969"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.747635 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2855a35c-55c0-4a23-bac0-98c18c0ce711-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2855a35c-55c0-4a23-bac0-98c18c0ce711" (UID: "2855a35c-55c0-4a23-bac0-98c18c0ce711"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.752939 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2855a35c-55c0-4a23-bac0-98c18c0ce711-kube-api-access-6mzvm" (OuterVolumeSpecName: "kube-api-access-6mzvm") pod "2855a35c-55c0-4a23-bac0-98c18c0ce711" (UID: "2855a35c-55c0-4a23-bac0-98c18c0ce711"). InnerVolumeSpecName "kube-api-access-6mzvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.753366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-kube-api-access-7z28k" (OuterVolumeSpecName: "kube-api-access-7z28k") pod "95e1e1cc-2317-4d00-a0e4-c9b9ce697969" (UID: "95e1e1cc-2317-4d00-a0e4-c9b9ce697969"). InnerVolumeSpecName "kube-api-access-7z28k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.848392 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2855a35c-55c0-4a23-bac0-98c18c0ce711-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.848430 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z28k\" (UniqueName: \"kubernetes.io/projected/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-kube-api-access-7z28k\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.848443 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e1e1cc-2317-4d00-a0e4-c9b9ce697969-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:33 crc kubenswrapper[4780]: I0219 09:48:33.848453 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzvm\" (UniqueName: \"kubernetes.io/projected/2855a35c-55c0-4a23-bac0-98c18c0ce711-kube-api-access-6mzvm\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:34 crc kubenswrapper[4780]: I0219 09:48:34.069965 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1caa-account-create-update-wwwpz" event={"ID":"2855a35c-55c0-4a23-bac0-98c18c0ce711","Type":"ContainerDied","Data":"d48895849e9ef21417c7773e5f544b65c9e7ce8662077942bedc2770c4f64b7f"} Feb 19 09:48:34 crc kubenswrapper[4780]: I0219 09:48:34.070011 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1caa-account-create-update-wwwpz" Feb 19 09:48:34 crc kubenswrapper[4780]: I0219 09:48:34.070032 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48895849e9ef21417c7773e5f544b65c9e7ce8662077942bedc2770c4f64b7f" Feb 19 09:48:34 crc kubenswrapper[4780]: I0219 09:48:34.072226 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9fmpp" event={"ID":"95e1e1cc-2317-4d00-a0e4-c9b9ce697969","Type":"ContainerDied","Data":"ea9ed82c1af1dce6b41517e8b491e975bd59fa5a2141a7b26f4ecda307377226"} Feb 19 09:48:34 crc kubenswrapper[4780]: I0219 09:48:34.072265 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9ed82c1af1dce6b41517e8b491e975bd59fa5a2141a7b26f4ecda307377226" Feb 19 09:48:34 crc kubenswrapper[4780]: I0219 09:48:34.072335 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9fmpp" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.736119 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pwhdh"] Feb 19 09:48:35 crc kubenswrapper[4780]: E0219 09:48:35.737189 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2855a35c-55c0-4a23-bac0-98c18c0ce711" containerName="mariadb-account-create-update" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.737211 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2855a35c-55c0-4a23-bac0-98c18c0ce711" containerName="mariadb-account-create-update" Feb 19 09:48:35 crc kubenswrapper[4780]: E0219 09:48:35.737251 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e1e1cc-2317-4d00-a0e4-c9b9ce697969" containerName="mariadb-database-create" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.737261 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e1e1cc-2317-4d00-a0e4-c9b9ce697969" 
containerName="mariadb-database-create" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.737496 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e1e1cc-2317-4d00-a0e4-c9b9ce697969" containerName="mariadb-database-create" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.737524 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2855a35c-55c0-4a23-bac0-98c18c0ce711" containerName="mariadb-account-create-update" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.738985 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.742340 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rwtwn" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.742691 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.742707 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.756058 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pwhdh"] Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.890935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-combined-ca-bundle\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.891153 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-config\") pod \"neutron-db-sync-pwhdh\" (UID: 
\"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.891369 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/f3b19ede-98bd-4bd7-9f80-060def069830-kube-api-access-tcfqk\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.993317 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/f3b19ede-98bd-4bd7-9f80-060def069830-kube-api-access-tcfqk\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.993395 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-combined-ca-bundle\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:35 crc kubenswrapper[4780]: I0219 09:48:35.993460 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-config\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:36 crc kubenswrapper[4780]: I0219 09:48:36.000358 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-combined-ca-bundle\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 
09:48:36 crc kubenswrapper[4780]: I0219 09:48:36.018106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-config\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:36 crc kubenswrapper[4780]: I0219 09:48:36.019307 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/f3b19ede-98bd-4bd7-9f80-060def069830-kube-api-access-tcfqk\") pod \"neutron-db-sync-pwhdh\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:36 crc kubenswrapper[4780]: I0219 09:48:36.083874 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:36 crc kubenswrapper[4780]: I0219 09:48:36.599738 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pwhdh"] Feb 19 09:48:37 crc kubenswrapper[4780]: I0219 09:48:37.101901 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pwhdh" event={"ID":"f3b19ede-98bd-4bd7-9f80-060def069830","Type":"ContainerStarted","Data":"b12850f7509ef0cb80cf15498a032d5b3a7fcc82477ef416958ed373d432b037"} Feb 19 09:48:37 crc kubenswrapper[4780]: I0219 09:48:37.101950 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pwhdh" event={"ID":"f3b19ede-98bd-4bd7-9f80-060def069830","Type":"ContainerStarted","Data":"f26e24c382e805371aeb7c6c5701d9fa62659410287b2d3d2574036dbc1964c0"} Feb 19 09:48:37 crc kubenswrapper[4780]: I0219 09:48:37.129635 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pwhdh" podStartSLOduration=2.129609238 podStartE2EDuration="2.129609238s" podCreationTimestamp="2026-02-19 09:48:35 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:37.123506789 +0000 UTC m=+5259.867164238" watchObservedRunningTime="2026-02-19 09:48:37.129609238 +0000 UTC m=+5259.873266687" Feb 19 09:48:42 crc kubenswrapper[4780]: I0219 09:48:42.154691 4780 generic.go:334] "Generic (PLEG): container finished" podID="f3b19ede-98bd-4bd7-9f80-060def069830" containerID="b12850f7509ef0cb80cf15498a032d5b3a7fcc82477ef416958ed373d432b037" exitCode=0 Feb 19 09:48:42 crc kubenswrapper[4780]: I0219 09:48:42.154829 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pwhdh" event={"ID":"f3b19ede-98bd-4bd7-9f80-060def069830","Type":"ContainerDied","Data":"b12850f7509ef0cb80cf15498a032d5b3a7fcc82477ef416958ed373d432b037"} Feb 19 09:48:42 crc kubenswrapper[4780]: I0219 09:48:42.938945 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:48:42 crc kubenswrapper[4780]: E0219 09:48:42.939587 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.579669 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.658710 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-config\") pod \"f3b19ede-98bd-4bd7-9f80-060def069830\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.658851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/f3b19ede-98bd-4bd7-9f80-060def069830-kube-api-access-tcfqk\") pod \"f3b19ede-98bd-4bd7-9f80-060def069830\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.658924 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-combined-ca-bundle\") pod \"f3b19ede-98bd-4bd7-9f80-060def069830\" (UID: \"f3b19ede-98bd-4bd7-9f80-060def069830\") " Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.668801 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b19ede-98bd-4bd7-9f80-060def069830-kube-api-access-tcfqk" (OuterVolumeSpecName: "kube-api-access-tcfqk") pod "f3b19ede-98bd-4bd7-9f80-060def069830" (UID: "f3b19ede-98bd-4bd7-9f80-060def069830"). InnerVolumeSpecName "kube-api-access-tcfqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.696281 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-config" (OuterVolumeSpecName: "config") pod "f3b19ede-98bd-4bd7-9f80-060def069830" (UID: "f3b19ede-98bd-4bd7-9f80-060def069830"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.698299 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3b19ede-98bd-4bd7-9f80-060def069830" (UID: "f3b19ede-98bd-4bd7-9f80-060def069830"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.761456 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.761508 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/f3b19ede-98bd-4bd7-9f80-060def069830-kube-api-access-tcfqk\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:43 crc kubenswrapper[4780]: I0219 09:48:43.761529 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3b19ede-98bd-4bd7-9f80-060def069830-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.181118 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pwhdh" event={"ID":"f3b19ede-98bd-4bd7-9f80-060def069830","Type":"ContainerDied","Data":"f26e24c382e805371aeb7c6c5701d9fa62659410287b2d3d2574036dbc1964c0"} Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.181483 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f26e24c382e805371aeb7c6c5701d9fa62659410287b2d3d2574036dbc1964c0" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.181249 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pwhdh" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.452220 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77d8d6f48f-jrdbv"] Feb 19 09:48:44 crc kubenswrapper[4780]: E0219 09:48:44.452642 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b19ede-98bd-4bd7-9f80-060def069830" containerName="neutron-db-sync" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.452658 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b19ede-98bd-4bd7-9f80-060def069830" containerName="neutron-db-sync" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.452818 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b19ede-98bd-4bd7-9f80-060def069830" containerName="neutron-db-sync" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.453707 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.468267 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d8d6f48f-jrdbv"] Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.514326 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86ddc9fb9f-mwj2f"] Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.516116 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.525111 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rwtwn" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.525231 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.525182 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.532655 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86ddc9fb9f-mwj2f"] Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580092 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-dns-svc\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580175 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-nb\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580212 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-config\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580239 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-config\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580258 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-httpd-config\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gft\" (UniqueName: \"kubernetes.io/projected/399eb34a-c13c-4454-849b-81645c2d6d44-kube-api-access-d5gft\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580617 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-combined-ca-bundle\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.580684 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvpl\" (UniqueName: \"kubernetes.io/projected/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-kube-api-access-6bvpl\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 
09:48:44.580719 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-sb\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bvpl\" (UniqueName: \"kubernetes.io/projected/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-kube-api-access-6bvpl\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682086 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-sb\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-dns-svc\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682183 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-nb\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682220 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-config\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682246 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-config\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682266 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-httpd-config\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gft\" (UniqueName: \"kubernetes.io/projected/399eb34a-c13c-4454-849b-81645c2d6d44-kube-api-access-d5gft\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.682333 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-combined-ca-bundle\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.683793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-nb\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.684635 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-sb\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.685172 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-dns-svc\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.689836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-config\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.691946 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-combined-ca-bundle\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.708865 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-httpd-config\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: 
\"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.709533 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/399eb34a-c13c-4454-849b-81645c2d6d44-config\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.717489 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gft\" (UniqueName: \"kubernetes.io/projected/399eb34a-c13c-4454-849b-81645c2d6d44-kube-api-access-d5gft\") pod \"neutron-86ddc9fb9f-mwj2f\" (UID: \"399eb34a-c13c-4454-849b-81645c2d6d44\") " pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.724892 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bvpl\" (UniqueName: \"kubernetes.io/projected/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-kube-api-access-6bvpl\") pod \"dnsmasq-dns-77d8d6f48f-jrdbv\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.791975 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:44 crc kubenswrapper[4780]: I0219 09:48:44.839852 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:45 crc kubenswrapper[4780]: W0219 09:48:45.358699 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b14c5b_fb81_41da_87c2_b7d56c0db67a.slice/crio-533a62ccf7e96212fa92a3878b87b8a1c030f0460f7c891838c7447e3f844b72 WatchSource:0}: Error finding container 533a62ccf7e96212fa92a3878b87b8a1c030f0460f7c891838c7447e3f844b72: Status 404 returned error can't find the container with id 533a62ccf7e96212fa92a3878b87b8a1c030f0460f7c891838c7447e3f844b72 Feb 19 09:48:45 crc kubenswrapper[4780]: I0219 09:48:45.358847 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d8d6f48f-jrdbv"] Feb 19 09:48:45 crc kubenswrapper[4780]: I0219 09:48:45.375860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86ddc9fb9f-mwj2f"] Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.197859 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ddc9fb9f-mwj2f" event={"ID":"399eb34a-c13c-4454-849b-81645c2d6d44","Type":"ContainerStarted","Data":"db8c58fbdd30b1d3a01f1ff0589bfd31cb3607abdec1910d648899f13448b42a"} Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.198224 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.198243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ddc9fb9f-mwj2f" event={"ID":"399eb34a-c13c-4454-849b-81645c2d6d44","Type":"ContainerStarted","Data":"411ba8ff8f2460794bd31b92e12df841ea7f118666298f63cc487be32fe7039a"} Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.198301 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ddc9fb9f-mwj2f" 
event={"ID":"399eb34a-c13c-4454-849b-81645c2d6d44","Type":"ContainerStarted","Data":"55c77c3d3eccd4a67b220eb7004f0799560ce60286fc35ea5bd420a4554a4877"} Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.199364 4780 generic.go:334] "Generic (PLEG): container finished" podID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerID="53d9c8d019593ca7fe6037c39d01d5d9b297c16b3f2f65eca288de60884544a8" exitCode=0 Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.199393 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" event={"ID":"e0b14c5b-fb81-41da-87c2-b7d56c0db67a","Type":"ContainerDied","Data":"53d9c8d019593ca7fe6037c39d01d5d9b297c16b3f2f65eca288de60884544a8"} Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.199409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" event={"ID":"e0b14c5b-fb81-41da-87c2-b7d56c0db67a","Type":"ContainerStarted","Data":"533a62ccf7e96212fa92a3878b87b8a1c030f0460f7c891838c7447e3f844b72"} Feb 19 09:48:46 crc kubenswrapper[4780]: I0219 09:48:46.231398 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86ddc9fb9f-mwj2f" podStartSLOduration=2.231366149 podStartE2EDuration="2.231366149s" podCreationTimestamp="2026-02-19 09:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:46.225725853 +0000 UTC m=+5268.969383312" watchObservedRunningTime="2026-02-19 09:48:46.231366149 +0000 UTC m=+5268.975023608" Feb 19 09:48:47 crc kubenswrapper[4780]: I0219 09:48:47.210051 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" event={"ID":"e0b14c5b-fb81-41da-87c2-b7d56c0db67a","Type":"ContainerStarted","Data":"71774290b0d877a0fbc1f5b6be2a5b0040f42c15dc98f83069a6b45edf660201"} Feb 19 09:48:47 crc kubenswrapper[4780]: I0219 09:48:47.210462 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:47 crc kubenswrapper[4780]: I0219 09:48:47.232209 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" podStartSLOduration=3.232194635 podStartE2EDuration="3.232194635s" podCreationTimestamp="2026-02-19 09:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:47.230868181 +0000 UTC m=+5269.974525620" watchObservedRunningTime="2026-02-19 09:48:47.232194635 +0000 UTC m=+5269.975852084" Feb 19 09:48:54 crc kubenswrapper[4780]: I0219 09:48:54.793413 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:48:54 crc kubenswrapper[4780]: I0219 09:48:54.857538 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66845c4585-sssf4"] Feb 19 09:48:54 crc kubenswrapper[4780]: I0219 09:48:54.857960 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66845c4585-sssf4" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerName="dnsmasq-dns" containerID="cri-o://72fb7343aea1d6a538a58b42aaa05aaf1da4e311e03a2d05994491bad68563cc" gracePeriod=10 Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.294518 4780 generic.go:334] "Generic (PLEG): container finished" podID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerID="72fb7343aea1d6a538a58b42aaa05aaf1da4e311e03a2d05994491bad68563cc" exitCode=0 Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.294991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66845c4585-sssf4" event={"ID":"1ce1b0c4-4bb5-408a-9547-920862f4070a","Type":"ContainerDied","Data":"72fb7343aea1d6a538a58b42aaa05aaf1da4e311e03a2d05994491bad68563cc"} Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 
09:48:55.457991 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.589569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-config\") pod \"1ce1b0c4-4bb5-408a-9547-920862f4070a\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.589652 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxmlr\" (UniqueName: \"kubernetes.io/projected/1ce1b0c4-4bb5-408a-9547-920862f4070a-kube-api-access-xxmlr\") pod \"1ce1b0c4-4bb5-408a-9547-920862f4070a\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.589743 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-sb\") pod \"1ce1b0c4-4bb5-408a-9547-920862f4070a\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.589790 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-nb\") pod \"1ce1b0c4-4bb5-408a-9547-920862f4070a\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.589820 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-dns-svc\") pod \"1ce1b0c4-4bb5-408a-9547-920862f4070a\" (UID: \"1ce1b0c4-4bb5-408a-9547-920862f4070a\") " Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.595699 4780 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce1b0c4-4bb5-408a-9547-920862f4070a-kube-api-access-xxmlr" (OuterVolumeSpecName: "kube-api-access-xxmlr") pod "1ce1b0c4-4bb5-408a-9547-920862f4070a" (UID: "1ce1b0c4-4bb5-408a-9547-920862f4070a"). InnerVolumeSpecName "kube-api-access-xxmlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.639643 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-config" (OuterVolumeSpecName: "config") pod "1ce1b0c4-4bb5-408a-9547-920862f4070a" (UID: "1ce1b0c4-4bb5-408a-9547-920862f4070a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.642432 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ce1b0c4-4bb5-408a-9547-920862f4070a" (UID: "1ce1b0c4-4bb5-408a-9547-920862f4070a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.643863 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ce1b0c4-4bb5-408a-9547-920862f4070a" (UID: "1ce1b0c4-4bb5-408a-9547-920862f4070a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.655160 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ce1b0c4-4bb5-408a-9547-920862f4070a" (UID: "1ce1b0c4-4bb5-408a-9547-920862f4070a"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.693407 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.693441 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxmlr\" (UniqueName: \"kubernetes.io/projected/1ce1b0c4-4bb5-408a-9547-920862f4070a-kube-api-access-xxmlr\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.693463 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.693472 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:55 crc kubenswrapper[4780]: I0219 09:48:55.693482 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce1b0c4-4bb5-408a-9547-920862f4070a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:56 crc kubenswrapper[4780]: I0219 09:48:56.304889 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66845c4585-sssf4" event={"ID":"1ce1b0c4-4bb5-408a-9547-920862f4070a","Type":"ContainerDied","Data":"7979e16c47ff77c79013d8150055ba1f44fd8bed3b7036a544ea44b9041f97a0"} Feb 19 09:48:56 crc kubenswrapper[4780]: I0219 09:48:56.304952 4780 scope.go:117] "RemoveContainer" containerID="72fb7343aea1d6a538a58b42aaa05aaf1da4e311e03a2d05994491bad68563cc" Feb 19 09:48:56 crc kubenswrapper[4780]: I0219 09:48:56.306523 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66845c4585-sssf4" Feb 19 09:48:56 crc kubenswrapper[4780]: I0219 09:48:56.330244 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66845c4585-sssf4"] Feb 19 09:48:56 crc kubenswrapper[4780]: I0219 09:48:56.332845 4780 scope.go:117] "RemoveContainer" containerID="315eb0bee7d04de61421b7c213e41af71b142cb4b8b1373ae9277d205a17c180" Feb 19 09:48:56 crc kubenswrapper[4780]: I0219 09:48:56.347629 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66845c4585-sssf4"] Feb 19 09:48:57 crc kubenswrapper[4780]: I0219 09:48:57.944756 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:48:57 crc kubenswrapper[4780]: E0219 09:48:57.945614 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:48:57 crc kubenswrapper[4780]: I0219 09:48:57.952438 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" path="/var/lib/kubelet/pods/1ce1b0c4-4bb5-408a-9547-920862f4070a/volumes" Feb 19 09:49:05 crc kubenswrapper[4780]: I0219 09:49:05.233335 4780 scope.go:117] "RemoveContainer" containerID="0301fda3d2f4c00573561d469bed9f62602e61a25143214f8283d23a011ad855" Feb 19 09:49:10 crc kubenswrapper[4780]: I0219 09:49:10.938588 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:49:11 crc kubenswrapper[4780]: I0219 09:49:11.468467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"e96f03e4144d44e4b89b473b042e09edbd6f26be94b76f997b0d0a3b99266763"} Feb 19 09:49:14 crc kubenswrapper[4780]: I0219 09:49:14.858387 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86ddc9fb9f-mwj2f" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.360313 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2kmv2"] Feb 19 09:49:22 crc kubenswrapper[4780]: E0219 09:49:22.361332 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerName="init" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.361353 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerName="init" Feb 19 09:49:22 crc kubenswrapper[4780]: E0219 09:49:22.361378 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerName="dnsmasq-dns" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.361386 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerName="dnsmasq-dns" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.361600 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce1b0c4-4bb5-408a-9547-920862f4070a" containerName="dnsmasq-dns" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.362350 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.374076 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2kmv2"] Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.459742 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-054e-account-create-update-vcjsc"] Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.460923 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.462996 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.471876 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-054e-account-create-update-vcjsc"] Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.497414 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-operator-scripts\") pod \"glance-db-create-2kmv2\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.497498 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfks\" (UniqueName: \"kubernetes.io/projected/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-kube-api-access-fxfks\") pod \"glance-db-create-2kmv2\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.599688 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c2271c34-fcda-4312-9e5b-89a96811c0a1-operator-scripts\") pod \"glance-054e-account-create-update-vcjsc\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.599755 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfks\" (UniqueName: \"kubernetes.io/projected/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-kube-api-access-fxfks\") pod \"glance-db-create-2kmv2\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.599816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdclv\" (UniqueName: \"kubernetes.io/projected/c2271c34-fcda-4312-9e5b-89a96811c0a1-kube-api-access-bdclv\") pod \"glance-054e-account-create-update-vcjsc\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.599893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-operator-scripts\") pod \"glance-db-create-2kmv2\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.600496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-operator-scripts\") pod \"glance-db-create-2kmv2\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.622944 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfks\" 
(UniqueName: \"kubernetes.io/projected/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-kube-api-access-fxfks\") pod \"glance-db-create-2kmv2\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.685470 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.701572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2271c34-fcda-4312-9e5b-89a96811c0a1-operator-scripts\") pod \"glance-054e-account-create-update-vcjsc\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.701687 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdclv\" (UniqueName: \"kubernetes.io/projected/c2271c34-fcda-4312-9e5b-89a96811c0a1-kube-api-access-bdclv\") pod \"glance-054e-account-create-update-vcjsc\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.702798 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2271c34-fcda-4312-9e5b-89a96811c0a1-operator-scripts\") pod \"glance-054e-account-create-update-vcjsc\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.745836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdclv\" (UniqueName: \"kubernetes.io/projected/c2271c34-fcda-4312-9e5b-89a96811c0a1-kube-api-access-bdclv\") pod \"glance-054e-account-create-update-vcjsc\" (UID: 
\"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:22 crc kubenswrapper[4780]: I0219 09:49:22.778993 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.287326 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2kmv2"] Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.397246 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-054e-account-create-update-vcjsc"] Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.569402 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kmv2" event={"ID":"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e","Type":"ContainerStarted","Data":"913d93ff2bf837ee2b10568ce28d685dd3937c80f338043d48e170e8947499fc"} Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.569461 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kmv2" event={"ID":"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e","Type":"ContainerStarted","Data":"064853d34acd79aa1585e286bd18484ffaab194f97e50f38ef92b75c4872d0d6"} Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.570615 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-054e-account-create-update-vcjsc" event={"ID":"c2271c34-fcda-4312-9e5b-89a96811c0a1","Type":"ContainerStarted","Data":"b969040121fbe604ea496501584b5b2b4b82385b1b063ad607832596c5b81d3a"} Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.570659 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-054e-account-create-update-vcjsc" event={"ID":"c2271c34-fcda-4312-9e5b-89a96811c0a1","Type":"ContainerStarted","Data":"a5a6d16a22d707930bdf0278f6291707fe49abc7c3f590db465cb0697eabd578"} Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.612673 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2kmv2" podStartSLOduration=1.612438998 podStartE2EDuration="1.612438998s" podCreationTimestamp="2026-02-19 09:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:23.58484274 +0000 UTC m=+5306.328500189" watchObservedRunningTime="2026-02-19 09:49:23.612438998 +0000 UTC m=+5306.356096447" Feb 19 09:49:23 crc kubenswrapper[4780]: I0219 09:49:23.615116 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-054e-account-create-update-vcjsc" podStartSLOduration=1.6151083979999998 podStartE2EDuration="1.615108398s" podCreationTimestamp="2026-02-19 09:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:23.601389111 +0000 UTC m=+5306.345046560" watchObservedRunningTime="2026-02-19 09:49:23.615108398 +0000 UTC m=+5306.358765847" Feb 19 09:49:24 crc kubenswrapper[4780]: I0219 09:49:24.585401 4780 generic.go:334] "Generic (PLEG): container finished" podID="c2271c34-fcda-4312-9e5b-89a96811c0a1" containerID="b969040121fbe604ea496501584b5b2b4b82385b1b063ad607832596c5b81d3a" exitCode=0 Feb 19 09:49:24 crc kubenswrapper[4780]: I0219 09:49:24.585507 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-054e-account-create-update-vcjsc" event={"ID":"c2271c34-fcda-4312-9e5b-89a96811c0a1","Type":"ContainerDied","Data":"b969040121fbe604ea496501584b5b2b4b82385b1b063ad607832596c5b81d3a"} Feb 19 09:49:24 crc kubenswrapper[4780]: I0219 09:49:24.588703 4780 generic.go:334] "Generic (PLEG): container finished" podID="d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" containerID="913d93ff2bf837ee2b10568ce28d685dd3937c80f338043d48e170e8947499fc" exitCode=0 Feb 19 09:49:24 crc kubenswrapper[4780]: I0219 09:49:24.588774 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kmv2" event={"ID":"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e","Type":"ContainerDied","Data":"913d93ff2bf837ee2b10568ce28d685dd3937c80f338043d48e170e8947499fc"} Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.014944 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.020095 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.176507 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdclv\" (UniqueName: \"kubernetes.io/projected/c2271c34-fcda-4312-9e5b-89a96811c0a1-kube-api-access-bdclv\") pod \"c2271c34-fcda-4312-9e5b-89a96811c0a1\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.176568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-operator-scripts\") pod \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.176610 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2271c34-fcda-4312-9e5b-89a96811c0a1-operator-scripts\") pod \"c2271c34-fcda-4312-9e5b-89a96811c0a1\" (UID: \"c2271c34-fcda-4312-9e5b-89a96811c0a1\") " Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.176756 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxfks\" (UniqueName: \"kubernetes.io/projected/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-kube-api-access-fxfks\") pod 
\"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\" (UID: \"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e\") " Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.177959 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" (UID: "d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.178016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2271c34-fcda-4312-9e5b-89a96811c0a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2271c34-fcda-4312-9e5b-89a96811c0a1" (UID: "c2271c34-fcda-4312-9e5b-89a96811c0a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.190075 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-kube-api-access-fxfks" (OuterVolumeSpecName: "kube-api-access-fxfks") pod "d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" (UID: "d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e"). InnerVolumeSpecName "kube-api-access-fxfks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.190120 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2271c34-fcda-4312-9e5b-89a96811c0a1-kube-api-access-bdclv" (OuterVolumeSpecName: "kube-api-access-bdclv") pod "c2271c34-fcda-4312-9e5b-89a96811c0a1" (UID: "c2271c34-fcda-4312-9e5b-89a96811c0a1"). InnerVolumeSpecName "kube-api-access-bdclv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.278237 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxfks\" (UniqueName: \"kubernetes.io/projected/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-kube-api-access-fxfks\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.278274 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdclv\" (UniqueName: \"kubernetes.io/projected/c2271c34-fcda-4312-9e5b-89a96811c0a1-kube-api-access-bdclv\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.278285 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.278294 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2271c34-fcda-4312-9e5b-89a96811c0a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.609008 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-054e-account-create-update-vcjsc" event={"ID":"c2271c34-fcda-4312-9e5b-89a96811c0a1","Type":"ContainerDied","Data":"a5a6d16a22d707930bdf0278f6291707fe49abc7c3f590db465cb0697eabd578"} Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.609089 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-054e-account-create-update-vcjsc" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.609091 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a6d16a22d707930bdf0278f6291707fe49abc7c3f590db465cb0697eabd578" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.611991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kmv2" event={"ID":"d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e","Type":"ContainerDied","Data":"064853d34acd79aa1585e286bd18484ffaab194f97e50f38ef92b75c4872d0d6"} Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.612050 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064853d34acd79aa1585e286bd18484ffaab194f97e50f38ef92b75c4872d0d6" Feb 19 09:49:26 crc kubenswrapper[4780]: I0219 09:49:26.612114 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2kmv2" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.652110 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hr7hx"] Feb 19 09:49:27 crc kubenswrapper[4780]: E0219 09:49:27.652790 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2271c34-fcda-4312-9e5b-89a96811c0a1" containerName="mariadb-account-create-update" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.652805 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2271c34-fcda-4312-9e5b-89a96811c0a1" containerName="mariadb-account-create-update" Feb 19 09:49:27 crc kubenswrapper[4780]: E0219 09:49:27.652833 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" containerName="mariadb-database-create" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.652842 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" containerName="mariadb-database-create" 
Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.653036 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2271c34-fcda-4312-9e5b-89a96811c0a1" containerName="mariadb-account-create-update" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.653053 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" containerName="mariadb-database-create" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.653739 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.656414 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b5hpl" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.656997 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.666743 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hr7hx"] Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.805323 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8j5n\" (UniqueName: \"kubernetes.io/projected/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-kube-api-access-l8j5n\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.805679 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-combined-ca-bundle\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.805773 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-config-data\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.805813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-db-sync-config-data\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.907516 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8j5n\" (UniqueName: \"kubernetes.io/projected/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-kube-api-access-l8j5n\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.907894 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-combined-ca-bundle\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.908013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-config-data\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.908115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-db-sync-config-data\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.915269 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-combined-ca-bundle\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.921157 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-config-data\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.921640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-db-sync-config-data\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.927166 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8j5n\" (UniqueName: \"kubernetes.io/projected/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-kube-api-access-l8j5n\") pod \"glance-db-sync-hr7hx\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:27 crc kubenswrapper[4780]: I0219 09:49:27.984081 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:28 crc kubenswrapper[4780]: I0219 09:49:28.349719 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hr7hx"] Feb 19 09:49:28 crc kubenswrapper[4780]: W0219 09:49:28.356418 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0a4bf86_54ba_4d73_aa81_3f3172bcc365.slice/crio-4b682f519a880cbf74e85e4747722ec8e9f874cbdba4ddf45926ef77a3057e3b WatchSource:0}: Error finding container 4b682f519a880cbf74e85e4747722ec8e9f874cbdba4ddf45926ef77a3057e3b: Status 404 returned error can't find the container with id 4b682f519a880cbf74e85e4747722ec8e9f874cbdba4ddf45926ef77a3057e3b Feb 19 09:49:28 crc kubenswrapper[4780]: I0219 09:49:28.631133 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hr7hx" event={"ID":"c0a4bf86-54ba-4d73-aa81-3f3172bcc365","Type":"ContainerStarted","Data":"4b682f519a880cbf74e85e4747722ec8e9f874cbdba4ddf45926ef77a3057e3b"} Feb 19 09:49:29 crc kubenswrapper[4780]: I0219 09:49:29.642817 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hr7hx" event={"ID":"c0a4bf86-54ba-4d73-aa81-3f3172bcc365","Type":"ContainerStarted","Data":"99fa089a114406875a6e3ab292d03a1b445c66603e4d8c2bd1a47456c624ffae"} Feb 19 09:49:29 crc kubenswrapper[4780]: I0219 09:49:29.664280 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hr7hx" podStartSLOduration=2.66425687 podStartE2EDuration="2.66425687s" podCreationTimestamp="2026-02-19 09:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:29.658036268 +0000 UTC m=+5312.401693717" watchObservedRunningTime="2026-02-19 09:49:29.66425687 +0000 UTC m=+5312.407914349" Feb 19 09:49:32 crc kubenswrapper[4780]: I0219 09:49:32.675013 4780 
generic.go:334] "Generic (PLEG): container finished" podID="c0a4bf86-54ba-4d73-aa81-3f3172bcc365" containerID="99fa089a114406875a6e3ab292d03a1b445c66603e4d8c2bd1a47456c624ffae" exitCode=0 Feb 19 09:49:32 crc kubenswrapper[4780]: I0219 09:49:32.675107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hr7hx" event={"ID":"c0a4bf86-54ba-4d73-aa81-3f3172bcc365","Type":"ContainerDied","Data":"99fa089a114406875a6e3ab292d03a1b445c66603e4d8c2bd1a47456c624ffae"} Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.121814 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.237709 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-config-data\") pod \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.238238 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-combined-ca-bundle\") pod \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.238469 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8j5n\" (UniqueName: \"kubernetes.io/projected/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-kube-api-access-l8j5n\") pod \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.239046 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-db-sync-config-data\") pod \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\" (UID: \"c0a4bf86-54ba-4d73-aa81-3f3172bcc365\") " Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.245335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c0a4bf86-54ba-4d73-aa81-3f3172bcc365" (UID: "c0a4bf86-54ba-4d73-aa81-3f3172bcc365"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.245480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-kube-api-access-l8j5n" (OuterVolumeSpecName: "kube-api-access-l8j5n") pod "c0a4bf86-54ba-4d73-aa81-3f3172bcc365" (UID: "c0a4bf86-54ba-4d73-aa81-3f3172bcc365"). InnerVolumeSpecName "kube-api-access-l8j5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.273757 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0a4bf86-54ba-4d73-aa81-3f3172bcc365" (UID: "c0a4bf86-54ba-4d73-aa81-3f3172bcc365"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.289858 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-config-data" (OuterVolumeSpecName: "config-data") pod "c0a4bf86-54ba-4d73-aa81-3f3172bcc365" (UID: "c0a4bf86-54ba-4d73-aa81-3f3172bcc365"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.343183 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.343231 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.343244 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.343258 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8j5n\" (UniqueName: \"kubernetes.io/projected/c0a4bf86-54ba-4d73-aa81-3f3172bcc365-kube-api-access-l8j5n\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.702276 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hr7hx" event={"ID":"c0a4bf86-54ba-4d73-aa81-3f3172bcc365","Type":"ContainerDied","Data":"4b682f519a880cbf74e85e4747722ec8e9f874cbdba4ddf45926ef77a3057e3b"} Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.702323 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b682f519a880cbf74e85e4747722ec8e9f874cbdba4ddf45926ef77a3057e3b" Feb 19 09:49:34 crc kubenswrapper[4780]: I0219 09:49:34.702377 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hr7hx" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.105838 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:35 crc kubenswrapper[4780]: E0219 09:49:35.106242 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a4bf86-54ba-4d73-aa81-3f3172bcc365" containerName="glance-db-sync" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.106260 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a4bf86-54ba-4d73-aa81-3f3172bcc365" containerName="glance-db-sync" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.106419 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a4bf86-54ba-4d73-aa81-3f3172bcc365" containerName="glance-db-sync" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.113415 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.119849 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.119862 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.119889 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.120499 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b5hpl" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.121250 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.204026 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6d98cdb5d5-7hrcm"] Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.205502 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.216475 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d98cdb5d5-7hrcm"] Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260168 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-logs\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260337 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2hsq\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-kube-api-access-j2hsq\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " 
pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-ceph\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.260412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.361291 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362438 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2hsq\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-kube-api-access-j2hsq\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362506 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-ceph\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362566 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-dns-svc\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362665 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362694 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtf2f\" (UniqueName: \"kubernetes.io/projected/6d72bf52-29ef-424f-a119-77d73311af2c-kube-api-access-rtf2f\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362716 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-config\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362751 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362772 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.362797 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-logs\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.363378 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-logs\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.363514 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.364118 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.369329 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.369659 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-ceph\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.369708 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-internal-config-data" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.373579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.380520 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2hsq\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-kube-api-access-j2hsq\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.381963 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.393562 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.430442 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464306 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464422 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464445 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: 
\"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464523 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464542 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4nj\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-kube-api-access-7l4nj\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464577 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-dns-svc\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464609 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtf2f\" (UniqueName: \"kubernetes.io/projected/6d72bf52-29ef-424f-a119-77d73311af2c-kube-api-access-rtf2f\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: 
\"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.464872 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-config\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.465008 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.466631 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.467278 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.467884 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-dns-svc\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc 
kubenswrapper[4780]: I0219 09:49:35.468967 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-config\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.488959 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtf2f\" (UniqueName: \"kubernetes.io/projected/6d72bf52-29ef-424f-a119-77d73311af2c-kube-api-access-rtf2f\") pod \"dnsmasq-dns-6d98cdb5d5-7hrcm\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.519249 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.566942 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567004 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l4nj\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-kube-api-access-7l4nj\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567178 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567245 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567264 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.567557 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 
09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.570259 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.572153 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.573342 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.573556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.573807 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.609728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7l4nj\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-kube-api-access-7l4nj\") pod \"glance-default-internal-api-0\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.748798 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:35 crc kubenswrapper[4780]: I0219 09:49:35.903722 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.123962 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d98cdb5d5-7hrcm"] Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.171270 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.363385 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.735547 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d72bf52-29ef-424f-a119-77d73311af2c" containerID="619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9" exitCode=0 Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.735715 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" event={"ID":"6d72bf52-29ef-424f-a119-77d73311af2c","Type":"ContainerDied","Data":"619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9"} Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.735917 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" event={"ID":"6d72bf52-29ef-424f-a119-77d73311af2c","Type":"ContainerStarted","Data":"ab7e2d5c22eaa1bcb9b8ef586868ab66412bc5013c07d500eb4635157bb42045"} Feb 19 
09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.738947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3","Type":"ContainerStarted","Data":"57f601bf22ea7d295d261bf449fb6b531e71adace03a6c6875db68cd8ccced23"} Feb 19 09:49:36 crc kubenswrapper[4780]: I0219 09:49:36.742608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd59fbc4-eb33-4119-9859-fb65b8961526","Type":"ContainerStarted","Data":"e22f82e506db81bafed344888dd33ea83893975489b18d4ac4d9502be73977d6"} Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.814939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd59fbc4-eb33-4119-9859-fb65b8961526","Type":"ContainerStarted","Data":"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228"} Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.815600 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd59fbc4-eb33-4119-9859-fb65b8961526","Type":"ContainerStarted","Data":"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c"} Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.815777 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-log" containerID="cri-o://cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c" gracePeriod=30 Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.818216 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-httpd" containerID="cri-o://95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228" gracePeriod=30 Feb 19 09:49:37 crc 
kubenswrapper[4780]: I0219 09:49:37.858667 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" event={"ID":"6d72bf52-29ef-424f-a119-77d73311af2c","Type":"ContainerStarted","Data":"4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab"} Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.859586 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.860611 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.860591618 podStartE2EDuration="2.860591618s" podCreationTimestamp="2026-02-19 09:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:37.855062144 +0000 UTC m=+5320.598719593" watchObservedRunningTime="2026-02-19 09:49:37.860591618 +0000 UTC m=+5320.604249067" Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.868227 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3","Type":"ContainerStarted","Data":"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92"} Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.868266 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3","Type":"ContainerStarted","Data":"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94"} Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.895223 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" podStartSLOduration=2.8951953980000003 podStartE2EDuration="2.895195398s" podCreationTimestamp="2026-02-19 09:49:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:37.892358204 +0000 UTC m=+5320.636015653" watchObservedRunningTime="2026-02-19 09:49:37.895195398 +0000 UTC m=+5320.638852857" Feb 19 09:49:37 crc kubenswrapper[4780]: I0219 09:49:37.913962 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.9139406450000003 podStartE2EDuration="2.913940645s" podCreationTimestamp="2026-02-19 09:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:37.913570606 +0000 UTC m=+5320.657228055" watchObservedRunningTime="2026-02-19 09:49:37.913940645 +0000 UTC m=+5320.657598094" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.442741 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.531372 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.537057 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-scripts\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.539398 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-config-data\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.539503 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-ceph\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.539557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-combined-ca-bundle\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.539673 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-logs\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.539731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-httpd-run\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.539844 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2hsq\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-kube-api-access-j2hsq\") pod \"fd59fbc4-eb33-4119-9859-fb65b8961526\" (UID: \"fd59fbc4-eb33-4119-9859-fb65b8961526\") " Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.540018 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-logs" (OuterVolumeSpecName: "logs") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.540143 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.540655 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.540676 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd59fbc4-eb33-4119-9859-fb65b8961526-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.542996 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-scripts" (OuterVolumeSpecName: "scripts") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.543521 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-ceph" (OuterVolumeSpecName: "ceph") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.543584 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-kube-api-access-j2hsq" (OuterVolumeSpecName: "kube-api-access-j2hsq") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). InnerVolumeSpecName "kube-api-access-j2hsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.566348 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.585934 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-config-data" (OuterVolumeSpecName: "config-data") pod "fd59fbc4-eb33-4119-9859-fb65b8961526" (UID: "fd59fbc4-eb33-4119-9859-fb65b8961526"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.642737 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2hsq\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-kube-api-access-j2hsq\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.642775 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.642787 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.642798 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd59fbc4-eb33-4119-9859-fb65b8961526-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.642807 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fbc4-eb33-4119-9859-fb65b8961526-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879454 4780 generic.go:334] "Generic (PLEG): container finished" podID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerID="95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228" exitCode=0 Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879490 4780 generic.go:334] "Generic (PLEG): container finished" podID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerID="cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c" exitCode=143 Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879547 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"fd59fbc4-eb33-4119-9859-fb65b8961526","Type":"ContainerDied","Data":"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228"} Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879589 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd59fbc4-eb33-4119-9859-fb65b8961526","Type":"ContainerDied","Data":"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c"} Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd59fbc4-eb33-4119-9859-fb65b8961526","Type":"ContainerDied","Data":"e22f82e506db81bafed344888dd33ea83893975489b18d4ac4d9502be73977d6"} Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.879639 4780 scope.go:117] "RemoveContainer" containerID="95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.914439 4780 scope.go:117] "RemoveContainer" containerID="cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.924640 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.931567 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.952322 4780 scope.go:117] "RemoveContainer" containerID="95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228" Feb 19 09:49:38 crc kubenswrapper[4780]: E0219 09:49:38.953427 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228\": container with ID starting with 95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228 not found: ID does not exist" containerID="95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.953499 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228"} err="failed to get container status \"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228\": rpc error: code = NotFound desc = could not find container \"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228\": container with ID starting with 95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228 not found: ID does not exist" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.953532 4780 scope.go:117] "RemoveContainer" containerID="cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c" Feb 19 09:49:38 crc kubenswrapper[4780]: E0219 09:49:38.954332 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c\": container with ID starting with cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c not found: ID does not exist" containerID="cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.954360 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c"} err="failed to get container status \"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c\": rpc error: code = NotFound desc = could not find container 
\"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c\": container with ID starting with cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c not found: ID does not exist" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.954400 4780 scope.go:117] "RemoveContainer" containerID="95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.955737 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228"} err="failed to get container status \"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228\": rpc error: code = NotFound desc = could not find container \"95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228\": container with ID starting with 95eb3bf20c1971097914174b97011a095d67f21e28db0fd363dbf6b807e64228 not found: ID does not exist" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.955764 4780 scope.go:117] "RemoveContainer" containerID="cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.956019 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c"} err="failed to get container status \"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c\": rpc error: code = NotFound desc = could not find container \"cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c\": container with ID starting with cf2dac057d68393bbcc4b3e010350721644d4cf79e91cdb19735f525906b395c not found: ID does not exist" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.957371 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:38 crc kubenswrapper[4780]: E0219 09:49:38.957785 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-httpd" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.957806 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-httpd" Feb 19 09:49:38 crc kubenswrapper[4780]: E0219 09:49:38.957839 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-log" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.957848 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-log" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.958079 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-log" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.958101 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" containerName="glance-httpd" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.959090 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.965468 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 09:49:38 crc kubenswrapper[4780]: I0219 09:49:38.976969 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-logs\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050299 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h94md\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-kube-api-access-h94md\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050341 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " 
pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050363 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.050413 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152330 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152381 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " 
pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152404 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152424 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152479 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-logs\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152505 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.152576 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h94md\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-kube-api-access-h94md\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: 
I0219 09:49:39.153488 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-logs\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.153604 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.157107 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.157996 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.158113 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.159728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.171002 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h94md\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-kube-api-access-h94md\") pod \"glance-default-external-api-0\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.284859 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.818742 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:49:39 crc kubenswrapper[4780]: W0219 09:49:39.820263 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc201488a_de5c_4ca4_8354_436bb1e687bf.slice/crio-ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f WatchSource:0}: Error finding container ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f: Status 404 returned error can't find the container with id ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.889617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c201488a-de5c-4ca4-8354-436bb1e687bf","Type":"ContainerStarted","Data":"ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f"} Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.891017 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-log" containerID="cri-o://8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94" gracePeriod=30 Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.891153 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-httpd" containerID="cri-o://2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92" gracePeriod=30 Feb 19 09:49:39 crc kubenswrapper[4780]: I0219 09:49:39.951397 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd59fbc4-eb33-4119-9859-fb65b8961526" path="/var/lib/kubelet/pods/fd59fbc4-eb33-4119-9859-fb65b8961526/volumes" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.490067 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.612941 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l4nj\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-kube-api-access-7l4nj\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.613524 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-config-data\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.613623 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-scripts\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: 
\"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.613740 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-logs\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.614417 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-logs" (OuterVolumeSpecName: "logs") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.614977 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-ceph\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.615150 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-combined-ca-bundle\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.615333 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-httpd-run\") pod \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\" (UID: \"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3\") " Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.615890 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.616820 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.616981 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.618477 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-kube-api-access-7l4nj" (OuterVolumeSpecName: "kube-api-access-7l4nj") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). InnerVolumeSpecName "kube-api-access-7l4nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.618713 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-ceph" (OuterVolumeSpecName: "ceph") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.619043 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-scripts" (OuterVolumeSpecName: "scripts") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.643934 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.660442 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-config-data" (OuterVolumeSpecName: "config-data") pod "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" (UID: "1fd9e411-4a8d-41a3-96fd-38a0a605c5a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.719148 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l4nj\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-kube-api-access-7l4nj\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.719184 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.719195 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.719204 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-ceph\") on node \"crc\" DevicePath \"\"" 
Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.719214 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906813 4780 generic.go:334] "Generic (PLEG): container finished" podID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerID="2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92" exitCode=0 Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906852 4780 generic.go:334] "Generic (PLEG): container finished" podID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerID="8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94" exitCode=143 Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3","Type":"ContainerDied","Data":"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92"} Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906909 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3","Type":"ContainerDied","Data":"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94"} Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fd9e411-4a8d-41a3-96fd-38a0a605c5a3","Type":"ContainerDied","Data":"57f601bf22ea7d295d261bf449fb6b531e71adace03a6c6875db68cd8ccced23"} Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.906964 4780 scope.go:117] "RemoveContainer" containerID="2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.911144 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c201488a-de5c-4ca4-8354-436bb1e687bf","Type":"ContainerStarted","Data":"3dda3e3be6e83812399203391e538a48d65323c59cdb7f64f6bc9ad745fc0bbd"} Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.933023 4780 scope.go:117] "RemoveContainer" containerID="8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.976693 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.984998 4780 scope.go:117] "RemoveContainer" containerID="2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.985089 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:40 crc kubenswrapper[4780]: E0219 09:49:40.985602 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92\": container with ID starting with 2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92 not found: ID does not exist" containerID="2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.985642 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92"} err="failed to get container status \"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92\": rpc error: code = NotFound desc = could not find container \"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92\": container with ID starting with 2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92 not found: ID does not exist" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.985677 4780 scope.go:117] "RemoveContainer" containerID="8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94" Feb 19 09:49:40 crc kubenswrapper[4780]: E0219 09:49:40.986621 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94\": container with ID starting with 8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94 not found: ID does not exist" containerID="8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.986645 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94"} err="failed to get container status \"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94\": rpc error: code = NotFound desc = could not find container 
\"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94\": container with ID starting with 8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94 not found: ID does not exist" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.986665 4780 scope.go:117] "RemoveContainer" containerID="2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.986885 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92"} err="failed to get container status \"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92\": rpc error: code = NotFound desc = could not find container \"2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92\": container with ID starting with 2cb509fbcf6d249f3860c54dd31790554196f107315d31b30de682cc7e2e1c92 not found: ID does not exist" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.986904 4780 scope.go:117] "RemoveContainer" containerID="8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.987103 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94"} err="failed to get container status \"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94\": rpc error: code = NotFound desc = could not find container \"8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94\": container with ID starting with 8978498dcc02c29e76fde4cacc1a6aaa0a6cf20022b4e9c87bbf423943dfbc94 not found: ID does not exist" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.994189 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:40 crc kubenswrapper[4780]: E0219 09:49:40.994595 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-httpd" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.994608 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-httpd" Feb 19 09:49:40 crc kubenswrapper[4780]: E0219 09:49:40.994625 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-log" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.994631 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-log" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.994795 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-log" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.994813 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" containerName="glance-httpd" Feb 19 09:49:40 crc kubenswrapper[4780]: I0219 09:49:40.995705 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.000095 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.003714 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129031 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129107 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129215 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdgjn\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-kube-api-access-tdgjn\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129384 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-logs\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129544 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.129845 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-ceph\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.231851 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.231920 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.231993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdgjn\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-kube-api-access-tdgjn\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.232025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-logs\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.232067 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.232146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.232191 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-ceph\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc 
kubenswrapper[4780]: I0219 09:49:41.233005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.233118 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-logs\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.236788 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.236818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.238262 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.240654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" 
(UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-ceph\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.249302 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdgjn\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-kube-api-access-tdgjn\") pod \"glance-default-internal-api-0\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.320634 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.905352 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.922546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06c1a22f-c2bb-4b5d-8b20-8580a91cc533","Type":"ContainerStarted","Data":"ca2e62f877049d8a122ad02aab47a7044cadffa2b8f6c925f9ad7882f1c7fa9f"} Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.931875 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c201488a-de5c-4ca4-8354-436bb1e687bf","Type":"ContainerStarted","Data":"71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf"} Feb 19 09:49:41 crc kubenswrapper[4780]: I0219 09:49:41.977735 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.97771241 podStartE2EDuration="3.97771241s" podCreationTimestamp="2026-02-19 09:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-19 09:49:41.952686939 +0000 UTC m=+5324.696344388" watchObservedRunningTime="2026-02-19 09:49:41.97771241 +0000 UTC m=+5324.721369859" Feb 19 09:49:42 crc kubenswrapper[4780]: I0219 09:49:42.003844 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd9e411-4a8d-41a3-96fd-38a0a605c5a3" path="/var/lib/kubelet/pods/1fd9e411-4a8d-41a3-96fd-38a0a605c5a3/volumes" Feb 19 09:49:42 crc kubenswrapper[4780]: I0219 09:49:42.950783 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06c1a22f-c2bb-4b5d-8b20-8580a91cc533","Type":"ContainerStarted","Data":"5a0e3983ef317dbfdfc3b88c762a3201bad158bb4a54fb8c70b9deeb2890293e"} Feb 19 09:49:43 crc kubenswrapper[4780]: I0219 09:49:43.971732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06c1a22f-c2bb-4b5d-8b20-8580a91cc533","Type":"ContainerStarted","Data":"e186890e54710a2680669fb6aac5af9c2c432e4fb5f603f6eeddb1cedb154837"} Feb 19 09:49:43 crc kubenswrapper[4780]: I0219 09:49:43.998601 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.9985757 podStartE2EDuration="3.9985757s" podCreationTimestamp="2026-02-19 09:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:43.99277149 +0000 UTC m=+5326.736428979" watchObservedRunningTime="2026-02-19 09:49:43.9985757 +0000 UTC m=+5326.742233159" Feb 19 09:49:45 crc kubenswrapper[4780]: I0219 09:49:45.522474 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:49:45 crc kubenswrapper[4780]: I0219 09:49:45.583347 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d8d6f48f-jrdbv"] Feb 19 09:49:45 crc 
kubenswrapper[4780]: I0219 09:49:45.583591 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerName="dnsmasq-dns" containerID="cri-o://71774290b0d877a0fbc1f5b6be2a5b0040f42c15dc98f83069a6b45edf660201" gracePeriod=10 Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.012513 4780 generic.go:334] "Generic (PLEG): container finished" podID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerID="71774290b0d877a0fbc1f5b6be2a5b0040f42c15dc98f83069a6b45edf660201" exitCode=0 Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.012560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" event={"ID":"e0b14c5b-fb81-41da-87c2-b7d56c0db67a","Type":"ContainerDied","Data":"71774290b0d877a0fbc1f5b6be2a5b0040f42c15dc98f83069a6b45edf660201"} Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.012590 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" event={"ID":"e0b14c5b-fb81-41da-87c2-b7d56c0db67a","Type":"ContainerDied","Data":"533a62ccf7e96212fa92a3878b87b8a1c030f0460f7c891838c7447e3f844b72"} Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.012602 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533a62ccf7e96212fa92a3878b87b8a1c030f0460f7c891838c7447e3f844b72" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.058274 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.204947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-config\") pod \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.205302 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-sb\") pod \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.205394 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-dns-svc\") pod \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.205492 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-nb\") pod \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.205650 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bvpl\" (UniqueName: \"kubernetes.io/projected/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-kube-api-access-6bvpl\") pod \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\" (UID: \"e0b14c5b-fb81-41da-87c2-b7d56c0db67a\") " Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.214523 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-kube-api-access-6bvpl" (OuterVolumeSpecName: "kube-api-access-6bvpl") pod "e0b14c5b-fb81-41da-87c2-b7d56c0db67a" (UID: "e0b14c5b-fb81-41da-87c2-b7d56c0db67a"). InnerVolumeSpecName "kube-api-access-6bvpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.248895 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0b14c5b-fb81-41da-87c2-b7d56c0db67a" (UID: "e0b14c5b-fb81-41da-87c2-b7d56c0db67a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.250869 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0b14c5b-fb81-41da-87c2-b7d56c0db67a" (UID: "e0b14c5b-fb81-41da-87c2-b7d56c0db67a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.258116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0b14c5b-fb81-41da-87c2-b7d56c0db67a" (UID: "e0b14c5b-fb81-41da-87c2-b7d56c0db67a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.284048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-config" (OuterVolumeSpecName: "config") pod "e0b14c5b-fb81-41da-87c2-b7d56c0db67a" (UID: "e0b14c5b-fb81-41da-87c2-b7d56c0db67a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.307643 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.307675 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.307684 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.307693 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bvpl\" (UniqueName: \"kubernetes.io/projected/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-kube-api-access-6bvpl\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:46 crc kubenswrapper[4780]: I0219 09:49:46.307705 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b14c5b-fb81-41da-87c2-b7d56c0db67a-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:49:47 crc kubenswrapper[4780]: I0219 09:49:47.020326 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77d8d6f48f-jrdbv" Feb 19 09:49:47 crc kubenswrapper[4780]: I0219 09:49:47.069666 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d8d6f48f-jrdbv"] Feb 19 09:49:47 crc kubenswrapper[4780]: I0219 09:49:47.076223 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77d8d6f48f-jrdbv"] Feb 19 09:49:47 crc kubenswrapper[4780]: I0219 09:49:47.956103 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" path="/var/lib/kubelet/pods/e0b14c5b-fb81-41da-87c2-b7d56c0db67a/volumes" Feb 19 09:49:49 crc kubenswrapper[4780]: I0219 09:49:49.285724 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 09:49:49 crc kubenswrapper[4780]: I0219 09:49:49.285793 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 09:49:49 crc kubenswrapper[4780]: I0219 09:49:49.342949 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 09:49:49 crc kubenswrapper[4780]: I0219 09:49:49.373665 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 09:49:50 crc kubenswrapper[4780]: I0219 09:49:50.047209 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 09:49:50 crc kubenswrapper[4780]: I0219 09:49:50.047264 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 09:49:51 crc kubenswrapper[4780]: I0219 09:49:51.321096 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:51 crc kubenswrapper[4780]: I0219 09:49:51.321460 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:51 crc kubenswrapper[4780]: I0219 09:49:51.360807 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:51 crc kubenswrapper[4780]: I0219 09:49:51.360948 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:51 crc kubenswrapper[4780]: I0219 09:49:51.888311 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 09:49:51 crc kubenswrapper[4780]: I0219 09:49:51.894701 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 09:49:52 crc kubenswrapper[4780]: I0219 09:49:52.063624 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:52 crc kubenswrapper[4780]: I0219 09:49:52.063664 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:53 crc kubenswrapper[4780]: I0219 09:49:53.871342 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 09:49:53 crc kubenswrapper[4780]: I0219 09:49:53.899285 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.803930 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-w6tg6"] Feb 19 09:50:01 crc kubenswrapper[4780]: E0219 09:50:01.804904 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerName="dnsmasq-dns" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.804924 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerName="dnsmasq-dns" Feb 19 09:50:01 crc kubenswrapper[4780]: E0219 09:50:01.804957 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerName="init" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.804965 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerName="init" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.805463 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b14c5b-fb81-41da-87c2-b7d56c0db67a" containerName="dnsmasq-dns" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.806210 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.818610 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-w6tg6"] Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.904385 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8705-account-create-update-c4zbl"] Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.905674 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.907999 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.919403 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8705-account-create-update-c4zbl"] Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.948154 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9f8f67a-63a8-4db2-819b-8296f36210f6-operator-scripts\") pod \"placement-db-create-w6tg6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:01 crc kubenswrapper[4780]: I0219 09:50:01.948446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwr9\" (UniqueName: \"kubernetes.io/projected/c9f8f67a-63a8-4db2-819b-8296f36210f6-kube-api-access-krwr9\") pod \"placement-db-create-w6tg6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.050877 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwr9\" (UniqueName: \"kubernetes.io/projected/c9f8f67a-63a8-4db2-819b-8296f36210f6-kube-api-access-krwr9\") pod \"placement-db-create-w6tg6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.051094 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-operator-scripts\") pod \"placement-8705-account-create-update-c4zbl\" (UID: 
\"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.051338 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ptb\" (UniqueName: \"kubernetes.io/projected/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-kube-api-access-k4ptb\") pod \"placement-8705-account-create-update-c4zbl\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.051433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9f8f67a-63a8-4db2-819b-8296f36210f6-operator-scripts\") pod \"placement-db-create-w6tg6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.053111 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9f8f67a-63a8-4db2-819b-8296f36210f6-operator-scripts\") pod \"placement-db-create-w6tg6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.075931 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwr9\" (UniqueName: \"kubernetes.io/projected/c9f8f67a-63a8-4db2-819b-8296f36210f6-kube-api-access-krwr9\") pod \"placement-db-create-w6tg6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.125463 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.152893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-operator-scripts\") pod \"placement-8705-account-create-update-c4zbl\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.153430 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ptb\" (UniqueName: \"kubernetes.io/projected/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-kube-api-access-k4ptb\") pod \"placement-8705-account-create-update-c4zbl\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.153592 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-operator-scripts\") pod \"placement-8705-account-create-update-c4zbl\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.171579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4ptb\" (UniqueName: \"kubernetes.io/projected/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-kube-api-access-k4ptb\") pod \"placement-8705-account-create-update-c4zbl\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.223152 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.560378 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-w6tg6"] Feb 19 09:50:02 crc kubenswrapper[4780]: W0219 09:50:02.560804 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9f8f67a_63a8_4db2_819b_8296f36210f6.slice/crio-434dda7d7ad4a395b6350513847eb8cd7f752b2013949d02ef1012005b3d26a7 WatchSource:0}: Error finding container 434dda7d7ad4a395b6350513847eb8cd7f752b2013949d02ef1012005b3d26a7: Status 404 returned error can't find the container with id 434dda7d7ad4a395b6350513847eb8cd7f752b2013949d02ef1012005b3d26a7 Feb 19 09:50:02 crc kubenswrapper[4780]: I0219 09:50:02.648777 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8705-account-create-update-c4zbl"] Feb 19 09:50:02 crc kubenswrapper[4780]: W0219 09:50:02.656449 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f4800ec_5ca3_46ef_bce7_ed31e16d6716.slice/crio-d82b54fde0ab14343b7c0082c8f7775b182a868160a8757653a249cd85622464 WatchSource:0}: Error finding container d82b54fde0ab14343b7c0082c8f7775b182a868160a8757653a249cd85622464: Status 404 returned error can't find the container with id d82b54fde0ab14343b7c0082c8f7775b182a868160a8757653a249cd85622464 Feb 19 09:50:03 crc kubenswrapper[4780]: I0219 09:50:03.212140 4780 generic.go:334] "Generic (PLEG): container finished" podID="c9f8f67a-63a8-4db2-819b-8296f36210f6" containerID="3ccdba25532a400b8cbfddda2c6dfac298db8ed1abd6301a8b253a222b9c7f16" exitCode=0 Feb 19 09:50:03 crc kubenswrapper[4780]: I0219 09:50:03.212222 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w6tg6" 
event={"ID":"c9f8f67a-63a8-4db2-819b-8296f36210f6","Type":"ContainerDied","Data":"3ccdba25532a400b8cbfddda2c6dfac298db8ed1abd6301a8b253a222b9c7f16"} Feb 19 09:50:03 crc kubenswrapper[4780]: I0219 09:50:03.212254 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w6tg6" event={"ID":"c9f8f67a-63a8-4db2-819b-8296f36210f6","Type":"ContainerStarted","Data":"434dda7d7ad4a395b6350513847eb8cd7f752b2013949d02ef1012005b3d26a7"} Feb 19 09:50:03 crc kubenswrapper[4780]: I0219 09:50:03.215837 4780 generic.go:334] "Generic (PLEG): container finished" podID="5f4800ec-5ca3-46ef-bce7-ed31e16d6716" containerID="f5c42dae23ffbe4bf90a1fa011fa0138106ae185fb95e6c7b84b2938e26c8e9c" exitCode=0 Feb 19 09:50:03 crc kubenswrapper[4780]: I0219 09:50:03.215900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8705-account-create-update-c4zbl" event={"ID":"5f4800ec-5ca3-46ef-bce7-ed31e16d6716","Type":"ContainerDied","Data":"f5c42dae23ffbe4bf90a1fa011fa0138106ae185fb95e6c7b84b2938e26c8e9c"} Feb 19 09:50:03 crc kubenswrapper[4780]: I0219 09:50:03.215947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8705-account-create-update-c4zbl" event={"ID":"5f4800ec-5ca3-46ef-bce7-ed31e16d6716","Type":"ContainerStarted","Data":"d82b54fde0ab14343b7c0082c8f7775b182a868160a8757653a249cd85622464"} Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.665444 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.698795 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.806065 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-operator-scripts\") pod \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.806149 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krwr9\" (UniqueName: \"kubernetes.io/projected/c9f8f67a-63a8-4db2-819b-8296f36210f6-kube-api-access-krwr9\") pod \"c9f8f67a-63a8-4db2-819b-8296f36210f6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.806225 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4ptb\" (UniqueName: \"kubernetes.io/projected/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-kube-api-access-k4ptb\") pod \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\" (UID: \"5f4800ec-5ca3-46ef-bce7-ed31e16d6716\") " Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.806272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9f8f67a-63a8-4db2-819b-8296f36210f6-operator-scripts\") pod \"c9f8f67a-63a8-4db2-819b-8296f36210f6\" (UID: \"c9f8f67a-63a8-4db2-819b-8296f36210f6\") " Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.807009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f4800ec-5ca3-46ef-bce7-ed31e16d6716" (UID: "5f4800ec-5ca3-46ef-bce7-ed31e16d6716"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.807008 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9f8f67a-63a8-4db2-819b-8296f36210f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9f8f67a-63a8-4db2-819b-8296f36210f6" (UID: "c9f8f67a-63a8-4db2-819b-8296f36210f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.811799 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-kube-api-access-k4ptb" (OuterVolumeSpecName: "kube-api-access-k4ptb") pod "5f4800ec-5ca3-46ef-bce7-ed31e16d6716" (UID: "5f4800ec-5ca3-46ef-bce7-ed31e16d6716"). InnerVolumeSpecName "kube-api-access-k4ptb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.814270 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f8f67a-63a8-4db2-819b-8296f36210f6-kube-api-access-krwr9" (OuterVolumeSpecName: "kube-api-access-krwr9") pod "c9f8f67a-63a8-4db2-819b-8296f36210f6" (UID: "c9f8f67a-63a8-4db2-819b-8296f36210f6"). InnerVolumeSpecName "kube-api-access-krwr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.908707 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9f8f67a-63a8-4db2-819b-8296f36210f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.908749 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.908762 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krwr9\" (UniqueName: \"kubernetes.io/projected/c9f8f67a-63a8-4db2-819b-8296f36210f6-kube-api-access-krwr9\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:04 crc kubenswrapper[4780]: I0219 09:50:04.908776 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4ptb\" (UniqueName: \"kubernetes.io/projected/5f4800ec-5ca3-46ef-bce7-ed31e16d6716-kube-api-access-k4ptb\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.233820 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8705-account-create-update-c4zbl" event={"ID":"5f4800ec-5ca3-46ef-bce7-ed31e16d6716","Type":"ContainerDied","Data":"d82b54fde0ab14343b7c0082c8f7775b182a868160a8757653a249cd85622464"} Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.233863 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82b54fde0ab14343b7c0082c8f7775b182a868160a8757653a249cd85622464" Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.233924 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8705-account-create-update-c4zbl" Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.239307 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w6tg6" event={"ID":"c9f8f67a-63a8-4db2-819b-8296f36210f6","Type":"ContainerDied","Data":"434dda7d7ad4a395b6350513847eb8cd7f752b2013949d02ef1012005b3d26a7"} Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.239363 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="434dda7d7ad4a395b6350513847eb8cd7f752b2013949d02ef1012005b3d26a7" Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.239486 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w6tg6" Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.338830 4780 scope.go:117] "RemoveContainer" containerID="a85a65c10871124fdb8b32374f1a828a0fa86496f3056a71ed0dfc01bd88c037" Feb 19 09:50:05 crc kubenswrapper[4780]: I0219 09:50:05.369087 4780 scope.go:117] "RemoveContainer" containerID="68074719f44ced2f337ff2e6d04540359a34b95c5948dfdb11717b01bc28de6e" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.246096 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-z6n6g"] Feb 19 09:50:07 crc kubenswrapper[4780]: E0219 09:50:07.253992 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f8f67a-63a8-4db2-819b-8296f36210f6" containerName="mariadb-database-create" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.254034 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f8f67a-63a8-4db2-819b-8296f36210f6" containerName="mariadb-database-create" Feb 19 09:50:07 crc kubenswrapper[4780]: E0219 09:50:07.254077 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4800ec-5ca3-46ef-bce7-ed31e16d6716" containerName="mariadb-account-create-update" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 
09:50:07.254086 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4800ec-5ca3-46ef-bce7-ed31e16d6716" containerName="mariadb-account-create-update" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.254308 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4800ec-5ca3-46ef-bce7-ed31e16d6716" containerName="mariadb-account-create-update" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.254345 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f8f67a-63a8-4db2-819b-8296f36210f6" containerName="mariadb-database-create" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.255066 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.257028 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.257675 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-m4j2p" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.257880 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.261364 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z6n6g"] Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.347698 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c8d5b6c-gb9ph"] Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.349973 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.356336 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c8d5b6c-gb9ph"] Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.377925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-scripts\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.378000 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw45j\" (UniqueName: \"kubernetes.io/projected/7cd66800-94d4-455d-b1da-1f262385b717-kube-api-access-hw45j\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.378022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd66800-94d4-455d-b1da-1f262385b717-logs\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.378049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-config-data\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.378073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-combined-ca-bundle\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-scripts\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-sb\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw45j\" (UniqueName: \"kubernetes.io/projected/7cd66800-94d4-455d-b1da-1f262385b717-kube-api-access-hw45j\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480194 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd66800-94d4-455d-b1da-1f262385b717-logs\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-config-data\") pod 
\"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480258 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-combined-ca-bundle\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480280 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf94g\" (UniqueName: \"kubernetes.io/projected/2ad88047-8ace-4ac4-abbd-598c0ed48c26-kube-api-access-kf94g\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-nb\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480352 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-dns-svc\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480415 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-config\") pod 
\"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.480831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd66800-94d4-455d-b1da-1f262385b717-logs\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.484877 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-scripts\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.487092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-config-data\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.488348 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-combined-ca-bundle\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.497426 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw45j\" (UniqueName: \"kubernetes.io/projected/7cd66800-94d4-455d-b1da-1f262385b717-kube-api-access-hw45j\") pod \"placement-db-sync-z6n6g\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: 
I0219 09:50:07.579498 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.582284 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-sb\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.582692 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf94g\" (UniqueName: \"kubernetes.io/projected/2ad88047-8ace-4ac4-abbd-598c0ed48c26-kube-api-access-kf94g\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.582728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-nb\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.582809 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-dns-svc\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.582883 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-config\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: 
\"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.583307 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-sb\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.583961 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-config\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.584029 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-nb\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.585484 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-dns-svc\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc kubenswrapper[4780]: I0219 09:50:07.603360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf94g\" (UniqueName: \"kubernetes.io/projected/2ad88047-8ace-4ac4-abbd-598c0ed48c26-kube-api-access-kf94g\") pod \"dnsmasq-dns-764c8d5b6c-gb9ph\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:07 crc 
kubenswrapper[4780]: I0219 09:50:07.670601 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:08 crc kubenswrapper[4780]: I0219 09:50:08.085437 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z6n6g"] Feb 19 09:50:08 crc kubenswrapper[4780]: I0219 09:50:08.194208 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c8d5b6c-gb9ph"] Feb 19 09:50:08 crc kubenswrapper[4780]: W0219 09:50:08.196750 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ad88047_8ace_4ac4_abbd_598c0ed48c26.slice/crio-12601749dd4c7800d78d470b21d8bb3295ce8682c0d59f742a3ab8062f616bf1 WatchSource:0}: Error finding container 12601749dd4c7800d78d470b21d8bb3295ce8682c0d59f742a3ab8062f616bf1: Status 404 returned error can't find the container with id 12601749dd4c7800d78d470b21d8bb3295ce8682c0d59f742a3ab8062f616bf1 Feb 19 09:50:08 crc kubenswrapper[4780]: I0219 09:50:08.262933 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6n6g" event={"ID":"7cd66800-94d4-455d-b1da-1f262385b717","Type":"ContainerStarted","Data":"3b9d45c8424ac171fb09acfe9aede25c52ce0278b5ec1fd1b7b0abacfb08b379"} Feb 19 09:50:08 crc kubenswrapper[4780]: I0219 09:50:08.265406 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" event={"ID":"2ad88047-8ace-4ac4-abbd-598c0ed48c26","Type":"ContainerStarted","Data":"12601749dd4c7800d78d470b21d8bb3295ce8682c0d59f742a3ab8062f616bf1"} Feb 19 09:50:09 crc kubenswrapper[4780]: I0219 09:50:09.316872 4780 generic.go:334] "Generic (PLEG): container finished" podID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerID="4ee7917106c6ee2236e4526e34ac024636ab92d9402e0314f43896e39129ad7a" exitCode=0 Feb 19 09:50:09 crc kubenswrapper[4780]: I0219 09:50:09.316990 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" event={"ID":"2ad88047-8ace-4ac4-abbd-598c0ed48c26","Type":"ContainerDied","Data":"4ee7917106c6ee2236e4526e34ac024636ab92d9402e0314f43896e39129ad7a"} Feb 19 09:50:09 crc kubenswrapper[4780]: I0219 09:50:09.334801 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6n6g" event={"ID":"7cd66800-94d4-455d-b1da-1f262385b717","Type":"ContainerStarted","Data":"39de7f44f4703ce0130ec0e64869ef912ab7b085b71776c2e157ce821fd3e4f4"} Feb 19 09:50:09 crc kubenswrapper[4780]: I0219 09:50:09.389015 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-z6n6g" podStartSLOduration=2.388994613 podStartE2EDuration="2.388994613s" podCreationTimestamp="2026-02-19 09:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:09.374882256 +0000 UTC m=+5352.118539705" watchObservedRunningTime="2026-02-19 09:50:09.388994613 +0000 UTC m=+5352.132652062" Feb 19 09:50:10 crc kubenswrapper[4780]: I0219 09:50:10.346994 4780 generic.go:334] "Generic (PLEG): container finished" podID="7cd66800-94d4-455d-b1da-1f262385b717" containerID="39de7f44f4703ce0130ec0e64869ef912ab7b085b71776c2e157ce821fd3e4f4" exitCode=0 Feb 19 09:50:10 crc kubenswrapper[4780]: I0219 09:50:10.347070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6n6g" event={"ID":"7cd66800-94d4-455d-b1da-1f262385b717","Type":"ContainerDied","Data":"39de7f44f4703ce0130ec0e64869ef912ab7b085b71776c2e157ce821fd3e4f4"} Feb 19 09:50:10 crc kubenswrapper[4780]: I0219 09:50:10.349772 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" event={"ID":"2ad88047-8ace-4ac4-abbd-598c0ed48c26","Type":"ContainerStarted","Data":"d234a4ca9a722a474abdc8be312eb1abaa6d338c2addd9bb0fcef3e033d50386"} Feb 19 09:50:10 crc 
kubenswrapper[4780]: I0219 09:50:10.350199 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:10 crc kubenswrapper[4780]: I0219 09:50:10.424289 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" podStartSLOduration=3.424267525 podStartE2EDuration="3.424267525s" podCreationTimestamp="2026-02-19 09:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:10.41564289 +0000 UTC m=+5353.159300349" watchObservedRunningTime="2026-02-19 09:50:10.424267525 +0000 UTC m=+5353.167924984" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.759869 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.863361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd66800-94d4-455d-b1da-1f262385b717-logs\") pod \"7cd66800-94d4-455d-b1da-1f262385b717\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.863489 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-scripts\") pod \"7cd66800-94d4-455d-b1da-1f262385b717\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.863937 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd66800-94d4-455d-b1da-1f262385b717-logs" (OuterVolumeSpecName: "logs") pod "7cd66800-94d4-455d-b1da-1f262385b717" (UID: "7cd66800-94d4-455d-b1da-1f262385b717"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.864651 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-config-data\") pod \"7cd66800-94d4-455d-b1da-1f262385b717\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.864704 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw45j\" (UniqueName: \"kubernetes.io/projected/7cd66800-94d4-455d-b1da-1f262385b717-kube-api-access-hw45j\") pod \"7cd66800-94d4-455d-b1da-1f262385b717\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.864789 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-combined-ca-bundle\") pod \"7cd66800-94d4-455d-b1da-1f262385b717\" (UID: \"7cd66800-94d4-455d-b1da-1f262385b717\") " Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.865435 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd66800-94d4-455d-b1da-1f262385b717-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.870112 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-scripts" (OuterVolumeSpecName: "scripts") pod "7cd66800-94d4-455d-b1da-1f262385b717" (UID: "7cd66800-94d4-455d-b1da-1f262385b717"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.871169 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd66800-94d4-455d-b1da-1f262385b717-kube-api-access-hw45j" (OuterVolumeSpecName: "kube-api-access-hw45j") pod "7cd66800-94d4-455d-b1da-1f262385b717" (UID: "7cd66800-94d4-455d-b1da-1f262385b717"). InnerVolumeSpecName "kube-api-access-hw45j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.888842 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-config-data" (OuterVolumeSpecName: "config-data") pod "7cd66800-94d4-455d-b1da-1f262385b717" (UID: "7cd66800-94d4-455d-b1da-1f262385b717"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.895018 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cd66800-94d4-455d-b1da-1f262385b717" (UID: "7cd66800-94d4-455d-b1da-1f262385b717"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.967789 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.967837 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.967860 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw45j\" (UniqueName: \"kubernetes.io/projected/7cd66800-94d4-455d-b1da-1f262385b717-kube-api-access-hw45j\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:11 crc kubenswrapper[4780]: I0219 09:50:11.967880 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd66800-94d4-455d-b1da-1f262385b717-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.370370 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6n6g" event={"ID":"7cd66800-94d4-455d-b1da-1f262385b717","Type":"ContainerDied","Data":"3b9d45c8424ac171fb09acfe9aede25c52ce0278b5ec1fd1b7b0abacfb08b379"} Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.370423 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b9d45c8424ac171fb09acfe9aede25c52ce0278b5ec1fd1b7b0abacfb08b379" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.370445 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-z6n6g" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.858641 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59d4db5886-9lhpt"] Feb 19 09:50:12 crc kubenswrapper[4780]: E0219 09:50:12.859657 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd66800-94d4-455d-b1da-1f262385b717" containerName="placement-db-sync" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.859686 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd66800-94d4-455d-b1da-1f262385b717" containerName="placement-db-sync" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.859931 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd66800-94d4-455d-b1da-1f262385b717" containerName="placement-db-sync" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.861167 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.865338 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-m4j2p" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.865546 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.867276 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.869059 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59d4db5886-9lhpt"] Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.986829 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-scripts\") pod \"placement-59d4db5886-9lhpt\" (UID: 
\"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.987106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79058d64-55d1-481a-92a4-65f605b10a3b-logs\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.987191 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-config-data\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.987304 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tbc\" (UniqueName: \"kubernetes.io/projected/79058d64-55d1-481a-92a4-65f605b10a3b-kube-api-access-h5tbc\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:12 crc kubenswrapper[4780]: I0219 09:50:12.987398 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-combined-ca-bundle\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.088842 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-combined-ca-bundle\") pod \"placement-59d4db5886-9lhpt\" (UID: 
\"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.088926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-scripts\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.089001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79058d64-55d1-481a-92a4-65f605b10a3b-logs\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.089146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-config-data\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.089288 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tbc\" (UniqueName: \"kubernetes.io/projected/79058d64-55d1-481a-92a4-65f605b10a3b-kube-api-access-h5tbc\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.089575 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79058d64-55d1-481a-92a4-65f605b10a3b-logs\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 
09:50:13.093030 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-config-data\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.094160 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-combined-ca-bundle\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.099729 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79058d64-55d1-481a-92a4-65f605b10a3b-scripts\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.115463 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tbc\" (UniqueName: \"kubernetes.io/projected/79058d64-55d1-481a-92a4-65f605b10a3b-kube-api-access-h5tbc\") pod \"placement-59d4db5886-9lhpt\" (UID: \"79058d64-55d1-481a-92a4-65f605b10a3b\") " pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.183496 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:13 crc kubenswrapper[4780]: I0219 09:50:13.657804 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59d4db5886-9lhpt"] Feb 19 09:50:13 crc kubenswrapper[4780]: W0219 09:50:13.674781 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79058d64_55d1_481a_92a4_65f605b10a3b.slice/crio-3beed74b4175b4b62ef829bf9553832e80e7218da53562a57e7fce8c244c78f3 WatchSource:0}: Error finding container 3beed74b4175b4b62ef829bf9553832e80e7218da53562a57e7fce8c244c78f3: Status 404 returned error can't find the container with id 3beed74b4175b4b62ef829bf9553832e80e7218da53562a57e7fce8c244c78f3 Feb 19 09:50:14 crc kubenswrapper[4780]: I0219 09:50:14.407874 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d4db5886-9lhpt" event={"ID":"79058d64-55d1-481a-92a4-65f605b10a3b","Type":"ContainerStarted","Data":"b9355e56d175ba95ebd655fb84c1951a384f908df285c79c86547bb3595305c6"} Feb 19 09:50:14 crc kubenswrapper[4780]: I0219 09:50:14.407920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d4db5886-9lhpt" event={"ID":"79058d64-55d1-481a-92a4-65f605b10a3b","Type":"ContainerStarted","Data":"afe2a343587ffb24cfdb6298bd3d444d3d7c71259e075f8a66ea71f54ee8f8e7"} Feb 19 09:50:14 crc kubenswrapper[4780]: I0219 09:50:14.407936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d4db5886-9lhpt" event={"ID":"79058d64-55d1-481a-92a4-65f605b10a3b","Type":"ContainerStarted","Data":"3beed74b4175b4b62ef829bf9553832e80e7218da53562a57e7fce8c244c78f3"} Feb 19 09:50:14 crc kubenswrapper[4780]: I0219 09:50:14.408117 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:14 crc kubenswrapper[4780]: I0219 09:50:14.434641 4780 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/placement-59d4db5886-9lhpt" podStartSLOduration=2.43462177 podStartE2EDuration="2.43462177s" podCreationTimestamp="2026-02-19 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:14.424332922 +0000 UTC m=+5357.167990381" watchObservedRunningTime="2026-02-19 09:50:14.43462177 +0000 UTC m=+5357.178279229" Feb 19 09:50:15 crc kubenswrapper[4780]: I0219 09:50:15.414995 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:17 crc kubenswrapper[4780]: I0219 09:50:17.673075 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:50:17 crc kubenswrapper[4780]: I0219 09:50:17.759033 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d98cdb5d5-7hrcm"] Feb 19 09:50:17 crc kubenswrapper[4780]: I0219 09:50:17.759320 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" containerName="dnsmasq-dns" containerID="cri-o://4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab" gracePeriod=10 Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.352291 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.418611 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-dns-svc\") pod \"6d72bf52-29ef-424f-a119-77d73311af2c\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.418726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-nb\") pod \"6d72bf52-29ef-424f-a119-77d73311af2c\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.418822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtf2f\" (UniqueName: \"kubernetes.io/projected/6d72bf52-29ef-424f-a119-77d73311af2c-kube-api-access-rtf2f\") pod \"6d72bf52-29ef-424f-a119-77d73311af2c\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.418849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-config\") pod \"6d72bf52-29ef-424f-a119-77d73311af2c\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.418902 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-sb\") pod \"6d72bf52-29ef-424f-a119-77d73311af2c\" (UID: \"6d72bf52-29ef-424f-a119-77d73311af2c\") " Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.425520 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6d72bf52-29ef-424f-a119-77d73311af2c-kube-api-access-rtf2f" (OuterVolumeSpecName: "kube-api-access-rtf2f") pod "6d72bf52-29ef-424f-a119-77d73311af2c" (UID: "6d72bf52-29ef-424f-a119-77d73311af2c"). InnerVolumeSpecName "kube-api-access-rtf2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.453689 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d72bf52-29ef-424f-a119-77d73311af2c" containerID="4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab" exitCode=0 Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.453970 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.453890 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" event={"ID":"6d72bf52-29ef-424f-a119-77d73311af2c","Type":"ContainerDied","Data":"4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab"} Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.454289 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d98cdb5d5-7hrcm" event={"ID":"6d72bf52-29ef-424f-a119-77d73311af2c","Type":"ContainerDied","Data":"ab7e2d5c22eaa1bcb9b8ef586868ab66412bc5013c07d500eb4635157bb42045"} Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.454323 4780 scope.go:117] "RemoveContainer" containerID="4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.472703 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d72bf52-29ef-424f-a119-77d73311af2c" (UID: "6d72bf52-29ef-424f-a119-77d73311af2c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.491967 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d72bf52-29ef-424f-a119-77d73311af2c" (UID: "6d72bf52-29ef-424f-a119-77d73311af2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.507961 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-config" (OuterVolumeSpecName: "config") pod "6d72bf52-29ef-424f-a119-77d73311af2c" (UID: "6d72bf52-29ef-424f-a119-77d73311af2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.515213 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d72bf52-29ef-424f-a119-77d73311af2c" (UID: "6d72bf52-29ef-424f-a119-77d73311af2c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.519838 4780 scope.go:117] "RemoveContainer" containerID="619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.522309 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.522338 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtf2f\" (UniqueName: \"kubernetes.io/projected/6d72bf52-29ef-424f-a119-77d73311af2c-kube-api-access-rtf2f\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.522353 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.522471 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.522485 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d72bf52-29ef-424f-a119-77d73311af2c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.552755 4780 scope.go:117] "RemoveContainer" containerID="4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab" Feb 19 09:50:18 crc kubenswrapper[4780]: E0219 09:50:18.553341 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab\": 
container with ID starting with 4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab not found: ID does not exist" containerID="4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.553380 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab"} err="failed to get container status \"4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab\": rpc error: code = NotFound desc = could not find container \"4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab\": container with ID starting with 4e427c8dba979c8fd545c987619f29719246cce31463e40693b3a747c4e6b7ab not found: ID does not exist" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.553405 4780 scope.go:117] "RemoveContainer" containerID="619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9" Feb 19 09:50:18 crc kubenswrapper[4780]: E0219 09:50:18.553882 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9\": container with ID starting with 619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9 not found: ID does not exist" containerID="619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.553928 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9"} err="failed to get container status \"619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9\": rpc error: code = NotFound desc = could not find container \"619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9\": container with ID starting with 
619d831f01894f3781f9a76d3c1bd34f367d597ae256315bb598f311885d7fe9 not found: ID does not exist" Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.790383 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d98cdb5d5-7hrcm"] Feb 19 09:50:18 crc kubenswrapper[4780]: I0219 09:50:18.801427 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d98cdb5d5-7hrcm"] Feb 19 09:50:19 crc kubenswrapper[4780]: I0219 09:50:19.961584 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" path="/var/lib/kubelet/pods/6d72bf52-29ef-424f-a119-77d73311af2c/volumes" Feb 19 09:50:44 crc kubenswrapper[4780]: I0219 09:50:44.161905 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:50:44 crc kubenswrapper[4780]: I0219 09:50:44.164479 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59d4db5886-9lhpt" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.076621 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fl8m5"] Feb 19 09:51:03 crc kubenswrapper[4780]: E0219 09:51:03.078093 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" containerName="init" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.078114 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" containerName="init" Feb 19 09:51:03 crc kubenswrapper[4780]: E0219 09:51:03.078153 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" containerName="dnsmasq-dns" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.078161 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" containerName="dnsmasq-dns" Feb 19 09:51:03 crc 
kubenswrapper[4780]: I0219 09:51:03.078346 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d72bf52-29ef-424f-a119-77d73311af2c" containerName="dnsmasq-dns" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.079779 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.090802 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl8m5"] Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.252022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-catalog-content\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.252998 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj2xh\" (UniqueName: \"kubernetes.io/projected/bfc8e897-683e-49ee-9f40-43296817420d-kube-api-access-fj2xh\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.253303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-utilities\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.356059 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-catalog-content\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.356172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj2xh\" (UniqueName: \"kubernetes.io/projected/bfc8e897-683e-49ee-9f40-43296817420d-kube-api-access-fj2xh\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.356303 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-utilities\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.356715 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-catalog-content\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.356832 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-utilities\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.390466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj2xh\" (UniqueName: 
\"kubernetes.io/projected/bfc8e897-683e-49ee-9f40-43296817420d-kube-api-access-fj2xh\") pod \"redhat-marketplace-fl8m5\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") " pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.407362 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl8m5" Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.887784 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl8m5"] Feb 19 09:51:03 crc kubenswrapper[4780]: I0219 09:51:03.952310 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl8m5" event={"ID":"bfc8e897-683e-49ee-9f40-43296817420d","Type":"ContainerStarted","Data":"169dd26304bd3ef62751bccdffc8559441f5c4a3082ae117fddec7c383a015e1"} Feb 19 09:51:04 crc kubenswrapper[4780]: I0219 09:51:04.959955 4780 generic.go:334] "Generic (PLEG): container finished" podID="bfc8e897-683e-49ee-9f40-43296817420d" containerID="05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee" exitCode=0 Feb 19 09:51:04 crc kubenswrapper[4780]: I0219 09:51:04.961303 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl8m5" event={"ID":"bfc8e897-683e-49ee-9f40-43296817420d","Type":"ContainerDied","Data":"05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee"} Feb 19 09:51:04 crc kubenswrapper[4780]: I0219 09:51:04.963797 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:51:05 crc kubenswrapper[4780]: I0219 09:51:05.509447 4780 scope.go:117] "RemoveContainer" containerID="dfa15500c2748fc4b25061ea04d9ee37d4ab13537453830df9dade87569d258e" Feb 19 09:51:05 crc kubenswrapper[4780]: I0219 09:51:05.535668 4780 scope.go:117] "RemoveContainer" 
containerID="ba04419b9adec605d045d549d9ef6588b94921edabf7e28695cd8158b13dac31" Feb 19 09:51:07 crc kubenswrapper[4780]: I0219 09:51:07.011588 4780 generic.go:334] "Generic (PLEG): container finished" podID="bfc8e897-683e-49ee-9f40-43296817420d" containerID="0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f" exitCode=0 Feb 19 09:51:07 crc kubenswrapper[4780]: I0219 09:51:07.011702 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl8m5" event={"ID":"bfc8e897-683e-49ee-9f40-43296817420d","Type":"ContainerDied","Data":"0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f"} Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.042654 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl8m5" event={"ID":"bfc8e897-683e-49ee-9f40-43296817420d","Type":"ContainerStarted","Data":"76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b"} Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.076682 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fl8m5" podStartSLOduration=2.375470981 podStartE2EDuration="5.076654525s" podCreationTimestamp="2026-02-19 09:51:03 +0000 UTC" firstStartedPulling="2026-02-19 09:51:04.963476025 +0000 UTC m=+5407.707133474" lastFinishedPulling="2026-02-19 09:51:07.664659569 +0000 UTC m=+5410.408317018" observedRunningTime="2026-02-19 09:51:08.067117139 +0000 UTC m=+5410.810774618" watchObservedRunningTime="2026-02-19 09:51:08.076654525 +0000 UTC m=+5410.820312014" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.794687 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2tk28"] Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.796938 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.815587 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2tk28"] Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.887589 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2g2jr"] Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.889253 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.896390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6wg\" (UniqueName: \"kubernetes.io/projected/b4fd5efa-d635-492a-9f0d-03bc75547d9f-kube-api-access-7v6wg\") pod \"nova-api-db-create-2tk28\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") " pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.896848 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4fd5efa-d635-492a-9f0d-03bc75547d9f-operator-scripts\") pod \"nova-api-db-create-2tk28\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") " pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.902840 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2g2jr"] Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.992655 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b504-account-create-update-45jzf"] Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.993811 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.996632 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.998225 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4fd5efa-d635-492a-9f0d-03bc75547d9f-operator-scripts\") pod \"nova-api-db-create-2tk28\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") " pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.998296 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6wg\" (UniqueName: \"kubernetes.io/projected/b4fd5efa-d635-492a-9f0d-03bc75547d9f-kube-api-access-7v6wg\") pod \"nova-api-db-create-2tk28\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") " pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.998322 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbcrj\" (UniqueName: \"kubernetes.io/projected/b49e45b6-af97-4283-9398-af6a2b81f11f-kube-api-access-lbcrj\") pod \"nova-cell0-db-create-2g2jr\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") " pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.998372 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49e45b6-af97-4283-9398-af6a2b81f11f-operator-scripts\") pod \"nova-cell0-db-create-2g2jr\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") " pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:08 crc kubenswrapper[4780]: I0219 09:51:08.999074 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4fd5efa-d635-492a-9f0d-03bc75547d9f-operator-scripts\") pod \"nova-api-db-create-2tk28\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") " pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.007464 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b504-account-create-update-45jzf"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.024016 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6wg\" (UniqueName: \"kubernetes.io/projected/b4fd5efa-d635-492a-9f0d-03bc75547d9f-kube-api-access-7v6wg\") pod \"nova-api-db-create-2tk28\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") " pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.093395 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xwq24"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.094858 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.099836 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbcrj\" (UniqueName: \"kubernetes.io/projected/b49e45b6-af97-4283-9398-af6a2b81f11f-kube-api-access-lbcrj\") pod \"nova-cell0-db-create-2g2jr\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") " pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.099903 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5a3587-4422-416d-abda-b793c782e693-operator-scripts\") pod \"nova-api-b504-account-create-update-45jzf\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") " pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.100008 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49e45b6-af97-4283-9398-af6a2b81f11f-operator-scripts\") pod \"nova-cell0-db-create-2g2jr\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") " pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.100201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtcc\" (UniqueName: \"kubernetes.io/projected/0c5a3587-4422-416d-abda-b793c782e693-kube-api-access-4dtcc\") pod \"nova-api-b504-account-create-update-45jzf\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") " pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.100779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49e45b6-af97-4283-9398-af6a2b81f11f-operator-scripts\") pod 
\"nova-cell0-db-create-2g2jr\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") " pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.106685 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xwq24"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.118598 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2tk28" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.118640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbcrj\" (UniqueName: \"kubernetes.io/projected/b49e45b6-af97-4283-9398-af6a2b81f11f-kube-api-access-lbcrj\") pod \"nova-cell0-db-create-2g2jr\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") " pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.201199 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-eddf-account-create-update-qthjk"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.202069 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtcc\" (UniqueName: \"kubernetes.io/projected/0c5a3587-4422-416d-abda-b793c782e693-kube-api-access-4dtcc\") pod \"nova-api-b504-account-create-update-45jzf\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") " pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.202155 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f964524-b75b-44cf-b10c-82ee59a8b98a-operator-scripts\") pod \"nova-cell1-db-create-xwq24\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") " pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.202191 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgk7x\" (UniqueName: \"kubernetes.io/projected/8f964524-b75b-44cf-b10c-82ee59a8b98a-kube-api-access-tgk7x\") pod \"nova-cell1-db-create-xwq24\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") " pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.202236 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5a3587-4422-416d-abda-b793c782e693-operator-scripts\") pod \"nova-api-b504-account-create-update-45jzf\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") " pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.202525 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.203091 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5a3587-4422-416d-abda-b793c782e693-operator-scripts\") pod \"nova-api-b504-account-create-update-45jzf\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") " pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.205847 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2g2jr" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.205948 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.217439 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-eddf-account-create-update-qthjk"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.235520 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtcc\" (UniqueName: \"kubernetes.io/projected/0c5a3587-4422-416d-abda-b793c782e693-kube-api-access-4dtcc\") pod \"nova-api-b504-account-create-update-45jzf\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") " pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.313023 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-operator-scripts\") pod \"nova-cell0-eddf-account-create-update-qthjk\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") " pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.313179 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f964524-b75b-44cf-b10c-82ee59a8b98a-operator-scripts\") pod \"nova-cell1-db-create-xwq24\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") " pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.313214 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgk7x\" (UniqueName: \"kubernetes.io/projected/8f964524-b75b-44cf-b10c-82ee59a8b98a-kube-api-access-tgk7x\") pod \"nova-cell1-db-create-xwq24\" (UID: 
\"8f964524-b75b-44cf-b10c-82ee59a8b98a\") " pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.313234 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46lnp\" (UniqueName: \"kubernetes.io/projected/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-kube-api-access-46lnp\") pod \"nova-cell0-eddf-account-create-update-qthjk\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") " pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.314056 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f964524-b75b-44cf-b10c-82ee59a8b98a-operator-scripts\") pod \"nova-cell1-db-create-xwq24\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") " pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.314383 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b504-account-create-update-45jzf" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.338560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgk7x\" (UniqueName: \"kubernetes.io/projected/8f964524-b75b-44cf-b10c-82ee59a8b98a-kube-api-access-tgk7x\") pod \"nova-cell1-db-create-xwq24\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") " pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.407567 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-59ad-account-create-update-mmkdh"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.409401 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.412593 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xwq24" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.414975 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-operator-scripts\") pod \"nova-cell0-eddf-account-create-update-qthjk\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") " pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.415103 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46lnp\" (UniqueName: \"kubernetes.io/projected/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-kube-api-access-46lnp\") pod \"nova-cell0-eddf-account-create-update-qthjk\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") " pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.417167 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.418409 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-operator-scripts\") pod \"nova-cell0-eddf-account-create-update-qthjk\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") " pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.441261 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-59ad-account-create-update-mmkdh"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.451419 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46lnp\" (UniqueName: \"kubernetes.io/projected/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-kube-api-access-46lnp\") pod \"nova-cell0-eddf-account-create-update-qthjk\" 
(UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") " pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.526073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cdced8-eb90-4931-b0c5-7a6d803aeef8-operator-scripts\") pod \"nova-cell1-59ad-account-create-update-mmkdh\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") " pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.526206 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xdp\" (UniqueName: \"kubernetes.io/projected/42cdced8-eb90-4931-b0c5-7a6d803aeef8-kube-api-access-b6xdp\") pod \"nova-cell1-59ad-account-create-update-mmkdh\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") " pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.629231 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cdced8-eb90-4931-b0c5-7a6d803aeef8-operator-scripts\") pod \"nova-cell1-59ad-account-create-update-mmkdh\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") " pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.629351 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6xdp\" (UniqueName: \"kubernetes.io/projected/42cdced8-eb90-4931-b0c5-7a6d803aeef8-kube-api-access-b6xdp\") pod \"nova-cell1-59ad-account-create-update-mmkdh\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") " pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.630295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cdced8-eb90-4931-b0c5-7a6d803aeef8-operator-scripts\") pod \"nova-cell1-59ad-account-create-update-mmkdh\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") " pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.648346 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6xdp\" (UniqueName: \"kubernetes.io/projected/42cdced8-eb90-4931-b0c5-7a6d803aeef8-kube-api-access-b6xdp\") pod \"nova-cell1-59ad-account-create-update-mmkdh\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") " pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.652285 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eddf-account-create-update-qthjk" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.756795 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.929919 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2tk28"] Feb 19 09:51:09 crc kubenswrapper[4780]: I0219 09:51:09.957369 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2g2jr"] Feb 19 09:51:10 crc kubenswrapper[4780]: I0219 09:51:10.050152 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b504-account-create-update-45jzf"] Feb 19 09:51:10 crc kubenswrapper[4780]: I0219 09:51:10.071618 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2g2jr" event={"ID":"b49e45b6-af97-4283-9398-af6a2b81f11f","Type":"ContainerStarted","Data":"f920f17f140f3983005aff91ec628307fc0900efa9d40e5b941bec03f5bd467c"} Feb 19 09:51:10 crc kubenswrapper[4780]: I0219 09:51:10.073541 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2tk28" event={"ID":"b4fd5efa-d635-492a-9f0d-03bc75547d9f","Type":"ContainerStarted","Data":"3248360b677fd92c5fad1e17d035a83a0ea3ae7edd38f60d4a1481f45da179e8"} Feb 19 09:51:10 crc kubenswrapper[4780]: I0219 09:51:10.179656 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xwq24"] Feb 19 09:51:10 crc kubenswrapper[4780]: W0219 09:51:10.197658 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f964524_b75b_44cf_b10c_82ee59a8b98a.slice/crio-a6b216a68fe6f7b80d2132055d6cb029d286995a1d16a1d4ac69887a71eb91bc WatchSource:0}: Error finding container a6b216a68fe6f7b80d2132055d6cb029d286995a1d16a1d4ac69887a71eb91bc: Status 404 returned error can't find the container with id a6b216a68fe6f7b80d2132055d6cb029d286995a1d16a1d4ac69887a71eb91bc Feb 19 09:51:10 crc kubenswrapper[4780]: W0219 09:51:10.326478 4780 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf30ded4_17c8_4c48_b5b5_aa689cd4b8d4.slice/crio-830283ab62d28ed39c80b61d9fffd8d27cd5ade07f82ad1a127942360828a8cc WatchSource:0}: Error finding container 830283ab62d28ed39c80b61d9fffd8d27cd5ade07f82ad1a127942360828a8cc: Status 404 returned error can't find the container with id 830283ab62d28ed39c80b61d9fffd8d27cd5ade07f82ad1a127942360828a8cc Feb 19 09:51:10 crc kubenswrapper[4780]: I0219 09:51:10.331809 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-eddf-account-create-update-qthjk"] Feb 19 09:51:10 crc kubenswrapper[4780]: I0219 09:51:10.482774 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-59ad-account-create-update-mmkdh"] Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.084417 4780 generic.go:334] "Generic (PLEG): container finished" podID="8f964524-b75b-44cf-b10c-82ee59a8b98a" containerID="71fab456b287f7f8d669e6c81599f27688415d408e2547af9a875447cd510acd" exitCode=0 Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.085150 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xwq24" event={"ID":"8f964524-b75b-44cf-b10c-82ee59a8b98a","Type":"ContainerDied","Data":"71fab456b287f7f8d669e6c81599f27688415d408e2547af9a875447cd510acd"} Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.085267 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xwq24" event={"ID":"8f964524-b75b-44cf-b10c-82ee59a8b98a","Type":"ContainerStarted","Data":"a6b216a68fe6f7b80d2132055d6cb029d286995a1d16a1d4ac69887a71eb91bc"} Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.087106 4780 generic.go:334] "Generic (PLEG): container finished" podID="b49e45b6-af97-4283-9398-af6a2b81f11f" containerID="f380bee83395e7e262c55ddf74173e8e169d0995f5839b82b1c3b416cd0aef96" exitCode=0 Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 
09:51:11.087284 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2g2jr" event={"ID":"b49e45b6-af97-4283-9398-af6a2b81f11f","Type":"ContainerDied","Data":"f380bee83395e7e262c55ddf74173e8e169d0995f5839b82b1c3b416cd0aef96"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.089938 4780 generic.go:334] "Generic (PLEG): container finished" podID="b4fd5efa-d635-492a-9f0d-03bc75547d9f" containerID="ece8e0f13a76f2cd79f7ab1fcd7bc4abc401d670d2dadd0a90c67606fdb993ac" exitCode=0
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.089971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2tk28" event={"ID":"b4fd5efa-d635-492a-9f0d-03bc75547d9f","Type":"ContainerDied","Data":"ece8e0f13a76f2cd79f7ab1fcd7bc4abc401d670d2dadd0a90c67606fdb993ac"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.092148 4780 generic.go:334] "Generic (PLEG): container finished" podID="42cdced8-eb90-4931-b0c5-7a6d803aeef8" containerID="ed69dbd290d7b2494ec99ff94a8bd7e09a117cf1d33df9362274a40e07df5086" exitCode=0
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.092332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" event={"ID":"42cdced8-eb90-4931-b0c5-7a6d803aeef8","Type":"ContainerDied","Data":"ed69dbd290d7b2494ec99ff94a8bd7e09a117cf1d33df9362274a40e07df5086"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.092450 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" event={"ID":"42cdced8-eb90-4931-b0c5-7a6d803aeef8","Type":"ContainerStarted","Data":"6d3c951996d94c230d7486ff9f2ed9ee24739ebd6fee63ea9487de2345dbe986"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.093999 4780 generic.go:334] "Generic (PLEG): container finished" podID="bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" containerID="b793c8c9e27fe989bf8d97b8a0f99db3119513ba03cada87cc77eed99f78f0d8" exitCode=0
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.094136 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eddf-account-create-update-qthjk" event={"ID":"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4","Type":"ContainerDied","Data":"b793c8c9e27fe989bf8d97b8a0f99db3119513ba03cada87cc77eed99f78f0d8"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.094160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eddf-account-create-update-qthjk" event={"ID":"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4","Type":"ContainerStarted","Data":"830283ab62d28ed39c80b61d9fffd8d27cd5ade07f82ad1a127942360828a8cc"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.095690 4780 generic.go:334] "Generic (PLEG): container finished" podID="0c5a3587-4422-416d-abda-b793c782e693" containerID="0124818e1adf352f65b325caa32f20892863a99980514839c5d57d773c0afed9" exitCode=0
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.095716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b504-account-create-update-45jzf" event={"ID":"0c5a3587-4422-416d-abda-b793c782e693","Type":"ContainerDied","Data":"0124818e1adf352f65b325caa32f20892863a99980514839c5d57d773c0afed9"}
Feb 19 09:51:11 crc kubenswrapper[4780]: I0219 09:51:11.095731 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b504-account-create-update-45jzf" event={"ID":"0c5a3587-4422-416d-abda-b793c782e693","Type":"ContainerStarted","Data":"5fb3cace979898f8e6634a3a9553d174df3fa6629475fb7157cd64a145193c5b"}
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.549211 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eddf-account-create-update-qthjk"
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.620731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-operator-scripts\") pod \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.620818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46lnp\" (UniqueName: \"kubernetes.io/projected/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-kube-api-access-46lnp\") pod \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\" (UID: \"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.622409 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" (UID: "bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.633024 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-kube-api-access-46lnp" (OuterVolumeSpecName: "kube-api-access-46lnp") pod "bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" (UID: "bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4"). InnerVolumeSpecName "kube-api-access-46lnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.722849 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46lnp\" (UniqueName: \"kubernetes.io/projected/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-kube-api-access-46lnp\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.722896 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.736779 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh"
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.747589 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2tk28"
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.755244 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xwq24"
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.775301 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b504-account-create-update-45jzf"
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.781980 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2g2jr"
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.925922 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49e45b6-af97-4283-9398-af6a2b81f11f-operator-scripts\") pod \"b49e45b6-af97-4283-9398-af6a2b81f11f\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.926056 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6wg\" (UniqueName: \"kubernetes.io/projected/b4fd5efa-d635-492a-9f0d-03bc75547d9f-kube-api-access-7v6wg\") pod \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.926192 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5a3587-4422-416d-abda-b793c782e693-operator-scripts\") pod \"0c5a3587-4422-416d-abda-b793c782e693\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.926345 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f964524-b75b-44cf-b10c-82ee59a8b98a-operator-scripts\") pod \"8f964524-b75b-44cf-b10c-82ee59a8b98a\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.926385 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgk7x\" (UniqueName: \"kubernetes.io/projected/8f964524-b75b-44cf-b10c-82ee59a8b98a-kube-api-access-tgk7x\") pod \"8f964524-b75b-44cf-b10c-82ee59a8b98a\" (UID: \"8f964524-b75b-44cf-b10c-82ee59a8b98a\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.926458 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dtcc\" (UniqueName: \"kubernetes.io/projected/0c5a3587-4422-416d-abda-b793c782e693-kube-api-access-4dtcc\") pod \"0c5a3587-4422-416d-abda-b793c782e693\" (UID: \"0c5a3587-4422-416d-abda-b793c782e693\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927004 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49e45b6-af97-4283-9398-af6a2b81f11f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b49e45b6-af97-4283-9398-af6a2b81f11f" (UID: "b49e45b6-af97-4283-9398-af6a2b81f11f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927291 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cdced8-eb90-4931-b0c5-7a6d803aeef8-operator-scripts\") pod \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f964524-b75b-44cf-b10c-82ee59a8b98a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f964524-b75b-44cf-b10c-82ee59a8b98a" (UID: "8f964524-b75b-44cf-b10c-82ee59a8b98a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927243 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5a3587-4422-416d-abda-b793c782e693-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c5a3587-4422-416d-abda-b793c782e693" (UID: "0c5a3587-4422-416d-abda-b793c782e693"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927517 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4fd5efa-d635-492a-9f0d-03bc75547d9f-operator-scripts\") pod \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\" (UID: \"b4fd5efa-d635-492a-9f0d-03bc75547d9f\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927575 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6xdp\" (UniqueName: \"kubernetes.io/projected/42cdced8-eb90-4931-b0c5-7a6d803aeef8-kube-api-access-b6xdp\") pod \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\" (UID: \"42cdced8-eb90-4931-b0c5-7a6d803aeef8\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927679 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbcrj\" (UniqueName: \"kubernetes.io/projected/b49e45b6-af97-4283-9398-af6a2b81f11f-kube-api-access-lbcrj\") pod \"b49e45b6-af97-4283-9398-af6a2b81f11f\" (UID: \"b49e45b6-af97-4283-9398-af6a2b81f11f\") "
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.927753 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cdced8-eb90-4931-b0c5-7a6d803aeef8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42cdced8-eb90-4931-b0c5-7a6d803aeef8" (UID: "42cdced8-eb90-4931-b0c5-7a6d803aeef8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.928477 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4fd5efa-d635-492a-9f0d-03bc75547d9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4fd5efa-d635-492a-9f0d-03bc75547d9f" (UID: "b4fd5efa-d635-492a-9f0d-03bc75547d9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.928708 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f964524-b75b-44cf-b10c-82ee59a8b98a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.928729 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cdced8-eb90-4931-b0c5-7a6d803aeef8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.928739 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4fd5efa-d635-492a-9f0d-03bc75547d9f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.928748 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49e45b6-af97-4283-9398-af6a2b81f11f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.928758 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5a3587-4422-416d-abda-b793c782e693-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.930779 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f964524-b75b-44cf-b10c-82ee59a8b98a-kube-api-access-tgk7x" (OuterVolumeSpecName: "kube-api-access-tgk7x") pod "8f964524-b75b-44cf-b10c-82ee59a8b98a" (UID: "8f964524-b75b-44cf-b10c-82ee59a8b98a"). InnerVolumeSpecName "kube-api-access-tgk7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.931916 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fd5efa-d635-492a-9f0d-03bc75547d9f-kube-api-access-7v6wg" (OuterVolumeSpecName: "kube-api-access-7v6wg") pod "b4fd5efa-d635-492a-9f0d-03bc75547d9f" (UID: "b4fd5efa-d635-492a-9f0d-03bc75547d9f"). InnerVolumeSpecName "kube-api-access-7v6wg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.932552 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49e45b6-af97-4283-9398-af6a2b81f11f-kube-api-access-lbcrj" (OuterVolumeSpecName: "kube-api-access-lbcrj") pod "b49e45b6-af97-4283-9398-af6a2b81f11f" (UID: "b49e45b6-af97-4283-9398-af6a2b81f11f"). InnerVolumeSpecName "kube-api-access-lbcrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.933538 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cdced8-eb90-4931-b0c5-7a6d803aeef8-kube-api-access-b6xdp" (OuterVolumeSpecName: "kube-api-access-b6xdp") pod "42cdced8-eb90-4931-b0c5-7a6d803aeef8" (UID: "42cdced8-eb90-4931-b0c5-7a6d803aeef8"). InnerVolumeSpecName "kube-api-access-b6xdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:12 crc kubenswrapper[4780]: I0219 09:51:12.935397 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5a3587-4422-416d-abda-b793c782e693-kube-api-access-4dtcc" (OuterVolumeSpecName: "kube-api-access-4dtcc") pod "0c5a3587-4422-416d-abda-b793c782e693" (UID: "0c5a3587-4422-416d-abda-b793c782e693"). InnerVolumeSpecName "kube-api-access-4dtcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.031360 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgk7x\" (UniqueName: \"kubernetes.io/projected/8f964524-b75b-44cf-b10c-82ee59a8b98a-kube-api-access-tgk7x\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.031424 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dtcc\" (UniqueName: \"kubernetes.io/projected/0c5a3587-4422-416d-abda-b793c782e693-kube-api-access-4dtcc\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.031554 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6xdp\" (UniqueName: \"kubernetes.io/projected/42cdced8-eb90-4931-b0c5-7a6d803aeef8-kube-api-access-b6xdp\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.031975 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbcrj\" (UniqueName: \"kubernetes.io/projected/b49e45b6-af97-4283-9398-af6a2b81f11f-kube-api-access-lbcrj\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.032021 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6wg\" (UniqueName: \"kubernetes.io/projected/b4fd5efa-d635-492a-9f0d-03bc75547d9f-kube-api-access-7v6wg\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.122073 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b504-account-create-update-45jzf" event={"ID":"0c5a3587-4422-416d-abda-b793c782e693","Type":"ContainerDied","Data":"5fb3cace979898f8e6634a3a9553d174df3fa6629475fb7157cd64a145193c5b"}
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.122119 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb3cace979898f8e6634a3a9553d174df3fa6629475fb7157cd64a145193c5b"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.122255 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b504-account-create-update-45jzf"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.125152 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xwq24"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.125169 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xwq24" event={"ID":"8f964524-b75b-44cf-b10c-82ee59a8b98a","Type":"ContainerDied","Data":"a6b216a68fe6f7b80d2132055d6cb029d286995a1d16a1d4ac69887a71eb91bc"}
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.125229 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b216a68fe6f7b80d2132055d6cb029d286995a1d16a1d4ac69887a71eb91bc"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.127813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2g2jr" event={"ID":"b49e45b6-af97-4283-9398-af6a2b81f11f","Type":"ContainerDied","Data":"f920f17f140f3983005aff91ec628307fc0900efa9d40e5b941bec03f5bd467c"}
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.127860 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2g2jr"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.127867 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f920f17f140f3983005aff91ec628307fc0900efa9d40e5b941bec03f5bd467c"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.139345 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2tk28" event={"ID":"b4fd5efa-d635-492a-9f0d-03bc75547d9f","Type":"ContainerDied","Data":"3248360b677fd92c5fad1e17d035a83a0ea3ae7edd38f60d4a1481f45da179e8"}
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.139709 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3248360b677fd92c5fad1e17d035a83a0ea3ae7edd38f60d4a1481f45da179e8"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.140118 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2tk28"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.143782 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh" event={"ID":"42cdced8-eb90-4931-b0c5-7a6d803aeef8","Type":"ContainerDied","Data":"6d3c951996d94c230d7486ff9f2ed9ee24739ebd6fee63ea9487de2345dbe986"}
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.143814 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d3c951996d94c230d7486ff9f2ed9ee24739ebd6fee63ea9487de2345dbe986"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.143948 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59ad-account-create-update-mmkdh"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.149313 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eddf-account-create-update-qthjk" event={"ID":"bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4","Type":"ContainerDied","Data":"830283ab62d28ed39c80b61d9fffd8d27cd5ade07f82ad1a127942360828a8cc"}
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.149409 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="830283ab62d28ed39c80b61d9fffd8d27cd5ade07f82ad1a127942360828a8cc"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.149558 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eddf-account-create-update-qthjk"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.408301 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fl8m5"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.408413 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fl8m5"
Feb 19 09:51:13 crc kubenswrapper[4780]: I0219 09:51:13.496186 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fl8m5"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.228299 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fl8m5"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.297747 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl8m5"]
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.487936 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hq2qh"]
Feb 19 09:51:14 crc kubenswrapper[4780]: E0219 09:51:14.488343 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd5efa-d635-492a-9f0d-03bc75547d9f" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488358 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd5efa-d635-492a-9f0d-03bc75547d9f" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: E0219 09:51:14.488373 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cdced8-eb90-4931-b0c5-7a6d803aeef8" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488379 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cdced8-eb90-4931-b0c5-7a6d803aeef8" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: E0219 09:51:14.488394 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488400 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: E0219 09:51:14.488419 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49e45b6-af97-4283-9398-af6a2b81f11f" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488425 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49e45b6-af97-4283-9398-af6a2b81f11f" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: E0219 09:51:14.488431 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f964524-b75b-44cf-b10c-82ee59a8b98a" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488439 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f964524-b75b-44cf-b10c-82ee59a8b98a" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: E0219 09:51:14.488448 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5a3587-4422-416d-abda-b793c782e693" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488454 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5a3587-4422-416d-abda-b793c782e693" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488635 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5a3587-4422-416d-abda-b793c782e693" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488651 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488663 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49e45b6-af97-4283-9398-af6a2b81f11f" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488679 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f964524-b75b-44cf-b10c-82ee59a8b98a" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488687 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="42cdced8-eb90-4931-b0c5-7a6d803aeef8" containerName="mariadb-account-create-update"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.488698 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fd5efa-d635-492a-9f0d-03bc75547d9f" containerName="mariadb-database-create"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.489495 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.492944 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.494172 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-d76z4"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.495634 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.517625 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hq2qh"]
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.587258 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkgq\" (UniqueName: \"kubernetes.io/projected/a1f356f3-3144-4be1-9276-d9d1554d0ea1-kube-api-access-tpkgq\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.587516 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-config-data\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.587574 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-scripts\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.587753 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.690394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkgq\" (UniqueName: \"kubernetes.io/projected/a1f356f3-3144-4be1-9276-d9d1554d0ea1-kube-api-access-tpkgq\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.690527 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-config-data\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.690560 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-scripts\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.690610 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.696982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-scripts\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.698231 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-config-data\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.700670 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.714045 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkgq\" (UniqueName: \"kubernetes.io/projected/a1f356f3-3144-4be1-9276-d9d1554d0ea1-kube-api-access-tpkgq\") pod \"nova-cell0-conductor-db-sync-hq2qh\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:14 crc kubenswrapper[4780]: I0219 09:51:14.817923 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hq2qh"
Feb 19 09:51:15 crc kubenswrapper[4780]: I0219 09:51:15.323975 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hq2qh"]
Feb 19 09:51:15 crc kubenswrapper[4780]: W0219 09:51:15.331060 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1f356f3_3144_4be1_9276_d9d1554d0ea1.slice/crio-3b078e2763c05b8afc3044d62543ae0845d98c089669d8323344fda53ccc7d14 WatchSource:0}: Error finding container 3b078e2763c05b8afc3044d62543ae0845d98c089669d8323344fda53ccc7d14: Status 404 returned error can't find the container with id 3b078e2763c05b8afc3044d62543ae0845d98c089669d8323344fda53ccc7d14
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.183342 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" event={"ID":"a1f356f3-3144-4be1-9276-d9d1554d0ea1","Type":"ContainerStarted","Data":"6d58c596c96e2a440644c73cbb63b232ba2e0a158ce65e68922ee256360f667b"}
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.184013 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" event={"ID":"a1f356f3-3144-4be1-9276-d9d1554d0ea1","Type":"ContainerStarted","Data":"3b078e2763c05b8afc3044d62543ae0845d98c089669d8323344fda53ccc7d14"}
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.183841 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fl8m5" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="registry-server" containerID="cri-o://76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b" gracePeriod=2
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.233943 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" podStartSLOduration=2.233906438 podStartE2EDuration="2.233906438s" podCreationTimestamp="2026-02-19 09:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:16.222225808 +0000 UTC m=+5418.965883267" watchObservedRunningTime="2026-02-19 09:51:16.233906438 +0000 UTC m=+5418.977563887"
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.663190 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl8m5"
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.734272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-utilities\") pod \"bfc8e897-683e-49ee-9f40-43296817420d\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") "
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.734357 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj2xh\" (UniqueName: \"kubernetes.io/projected/bfc8e897-683e-49ee-9f40-43296817420d-kube-api-access-fj2xh\") pod \"bfc8e897-683e-49ee-9f40-43296817420d\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") "
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.734492 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-catalog-content\") pod \"bfc8e897-683e-49ee-9f40-43296817420d\" (UID: \"bfc8e897-683e-49ee-9f40-43296817420d\") "
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.736556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-utilities" (OuterVolumeSpecName: "utilities") pod "bfc8e897-683e-49ee-9f40-43296817420d" (UID: "bfc8e897-683e-49ee-9f40-43296817420d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.742871 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc8e897-683e-49ee-9f40-43296817420d-kube-api-access-fj2xh" (OuterVolumeSpecName: "kube-api-access-fj2xh") pod "bfc8e897-683e-49ee-9f40-43296817420d" (UID: "bfc8e897-683e-49ee-9f40-43296817420d"). InnerVolumeSpecName "kube-api-access-fj2xh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.758736 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfc8e897-683e-49ee-9f40-43296817420d" (UID: "bfc8e897-683e-49ee-9f40-43296817420d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.836288 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.836334 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj2xh\" (UniqueName: \"kubernetes.io/projected/bfc8e897-683e-49ee-9f40-43296817420d-kube-api-access-fj2xh\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:16 crc kubenswrapper[4780]: I0219 09:51:16.836344 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8e897-683e-49ee-9f40-43296817420d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.204344 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl8m5"
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.204397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl8m5" event={"ID":"bfc8e897-683e-49ee-9f40-43296817420d","Type":"ContainerDied","Data":"76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b"}
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.204472 4780 scope.go:117] "RemoveContainer" containerID="76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b"
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.204231 4780 generic.go:334] "Generic (PLEG): container finished" podID="bfc8e897-683e-49ee-9f40-43296817420d" containerID="76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b" exitCode=0
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.204899 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl8m5" event={"ID":"bfc8e897-683e-49ee-9f40-43296817420d","Type":"ContainerDied","Data":"169dd26304bd3ef62751bccdffc8559441f5c4a3082ae117fddec7c383a015e1"}
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.236016 4780 scope.go:117] "RemoveContainer" containerID="0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f"
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.275904 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl8m5"]
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.285934 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl8m5"]
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.288911 4780 scope.go:117] "RemoveContainer" containerID="05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee"
Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.317446 4780 scope.go:117] "RemoveContainer"
containerID="76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b" Feb 19 09:51:17 crc kubenswrapper[4780]: E0219 09:51:17.318870 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b\": container with ID starting with 76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b not found: ID does not exist" containerID="76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b" Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.318934 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b"} err="failed to get container status \"76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b\": rpc error: code = NotFound desc = could not find container \"76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b\": container with ID starting with 76b2186d5029aa8556ac68b87eab135ade9c843959c5e224e644d9b4b6c4275b not found: ID does not exist" Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.318979 4780 scope.go:117] "RemoveContainer" containerID="0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f" Feb 19 09:51:17 crc kubenswrapper[4780]: E0219 09:51:17.323621 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f\": container with ID starting with 0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f not found: ID does not exist" containerID="0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f" Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.323671 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f"} err="failed to get container status \"0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f\": rpc error: code = NotFound desc = could not find container \"0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f\": container with ID starting with 0707ee43ec49d88a372d811ffee5cfad166382881e6e9a4b518ea6736dc8856f not found: ID does not exist" Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.323701 4780 scope.go:117] "RemoveContainer" containerID="05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee" Feb 19 09:51:17 crc kubenswrapper[4780]: E0219 09:51:17.324175 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee\": container with ID starting with 05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee not found: ID does not exist" containerID="05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee" Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.324240 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee"} err="failed to get container status \"05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee\": rpc error: code = NotFound desc = could not find container \"05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee\": container with ID starting with 05eb63e3a49afe5258996ca1d3169887dbe95a54ae0ea3d3a09f62d37e0a82ee not found: ID does not exist" Feb 19 09:51:17 crc kubenswrapper[4780]: I0219 09:51:17.958439 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc8e897-683e-49ee-9f40-43296817420d" path="/var/lib/kubelet/pods/bfc8e897-683e-49ee-9f40-43296817420d/volumes" Feb 19 09:51:25 crc kubenswrapper[4780]: I0219 
09:51:25.313646 4780 generic.go:334] "Generic (PLEG): container finished" podID="a1f356f3-3144-4be1-9276-d9d1554d0ea1" containerID="6d58c596c96e2a440644c73cbb63b232ba2e0a158ce65e68922ee256360f667b" exitCode=0 Feb 19 09:51:25 crc kubenswrapper[4780]: I0219 09:51:25.313759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" event={"ID":"a1f356f3-3144-4be1-9276-d9d1554d0ea1","Type":"ContainerDied","Data":"6d58c596c96e2a440644c73cbb63b232ba2e0a158ce65e68922ee256360f667b"} Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.715550 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.808569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-scripts\") pod \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.808848 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpkgq\" (UniqueName: \"kubernetes.io/projected/a1f356f3-3144-4be1-9276-d9d1554d0ea1-kube-api-access-tpkgq\") pod \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.811292 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-config-data\") pod \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.811531 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-combined-ca-bundle\") pod \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\" (UID: \"a1f356f3-3144-4be1-9276-d9d1554d0ea1\") " Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.817616 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-scripts" (OuterVolumeSpecName: "scripts") pod "a1f356f3-3144-4be1-9276-d9d1554d0ea1" (UID: "a1f356f3-3144-4be1-9276-d9d1554d0ea1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.817907 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f356f3-3144-4be1-9276-d9d1554d0ea1-kube-api-access-tpkgq" (OuterVolumeSpecName: "kube-api-access-tpkgq") pod "a1f356f3-3144-4be1-9276-d9d1554d0ea1" (UID: "a1f356f3-3144-4be1-9276-d9d1554d0ea1"). InnerVolumeSpecName "kube-api-access-tpkgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.838776 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-config-data" (OuterVolumeSpecName: "config-data") pod "a1f356f3-3144-4be1-9276-d9d1554d0ea1" (UID: "a1f356f3-3144-4be1-9276-d9d1554d0ea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.853313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1f356f3-3144-4be1-9276-d9d1554d0ea1" (UID: "a1f356f3-3144-4be1-9276-d9d1554d0ea1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.915646 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.915702 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpkgq\" (UniqueName: \"kubernetes.io/projected/a1f356f3-3144-4be1-9276-d9d1554d0ea1-kube-api-access-tpkgq\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.915729 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:26 crc kubenswrapper[4780]: I0219 09:51:26.915750 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f356f3-3144-4be1-9276-d9d1554d0ea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.337817 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" event={"ID":"a1f356f3-3144-4be1-9276-d9d1554d0ea1","Type":"ContainerDied","Data":"3b078e2763c05b8afc3044d62543ae0845d98c089669d8323344fda53ccc7d14"} Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.337884 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b078e2763c05b8afc3044d62543ae0845d98c089669d8323344fda53ccc7d14" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.338013 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hq2qh" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.447119 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:51:27 crc kubenswrapper[4780]: E0219 09:51:27.447945 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="registry-server" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.447979 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="registry-server" Feb 19 09:51:27 crc kubenswrapper[4780]: E0219 09:51:27.448035 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="extract-content" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.448049 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="extract-content" Feb 19 09:51:27 crc kubenswrapper[4780]: E0219 09:51:27.448101 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f356f3-3144-4be1-9276-d9d1554d0ea1" containerName="nova-cell0-conductor-db-sync" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.448114 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f356f3-3144-4be1-9276-d9d1554d0ea1" containerName="nova-cell0-conductor-db-sync" Feb 19 09:51:27 crc kubenswrapper[4780]: E0219 09:51:27.448165 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="extract-utilities" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.448180 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="extract-utilities" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.448499 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bfc8e897-683e-49ee-9f40-43296817420d" containerName="registry-server" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.448538 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f356f3-3144-4be1-9276-d9d1554d0ea1" containerName="nova-cell0-conductor-db-sync" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.449740 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.457751 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.461394 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-d76z4" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.463889 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.531712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bglj8\" (UniqueName: \"kubernetes.io/projected/3785f226-0b67-49df-bbf8-047fab757679-kube-api-access-bglj8\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.532830 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.533523 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.636195 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.636786 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.636959 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bglj8\" (UniqueName: \"kubernetes.io/projected/3785f226-0b67-49df-bbf8-047fab757679-kube-api-access-bglj8\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.640615 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.652590 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.657293 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bglj8\" (UniqueName: \"kubernetes.io/projected/3785f226-0b67-49df-bbf8-047fab757679-kube-api-access-bglj8\") pod \"nova-cell0-conductor-0\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:27 crc kubenswrapper[4780]: I0219 09:51:27.790419 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:28 crc kubenswrapper[4780]: I0219 09:51:28.215440 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:51:28 crc kubenswrapper[4780]: W0219 09:51:28.225638 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3785f226_0b67_49df_bbf8_047fab757679.slice/crio-fe9e7483c60807449b2320f031b0a5022846db06362ebcf651a2af0d8baae146 WatchSource:0}: Error finding container fe9e7483c60807449b2320f031b0a5022846db06362ebcf651a2af0d8baae146: Status 404 returned error can't find the container with id fe9e7483c60807449b2320f031b0a5022846db06362ebcf651a2af0d8baae146 Feb 19 09:51:28 crc kubenswrapper[4780]: I0219 09:51:28.346499 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3785f226-0b67-49df-bbf8-047fab757679","Type":"ContainerStarted","Data":"fe9e7483c60807449b2320f031b0a5022846db06362ebcf651a2af0d8baae146"} Feb 19 09:51:29 crc kubenswrapper[4780]: I0219 09:51:29.362157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3785f226-0b67-49df-bbf8-047fab757679","Type":"ContainerStarted","Data":"1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377"} Feb 19 09:51:29 crc 
kubenswrapper[4780]: I0219 09:51:29.362853 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:29 crc kubenswrapper[4780]: I0219 09:51:29.433676 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.433640147 podStartE2EDuration="2.433640147s" podCreationTimestamp="2026-02-19 09:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:29.393856443 +0000 UTC m=+5432.137513932" watchObservedRunningTime="2026-02-19 09:51:29.433640147 +0000 UTC m=+5432.177297606" Feb 19 09:51:36 crc kubenswrapper[4780]: I0219 09:51:36.336476 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:51:36 crc kubenswrapper[4780]: I0219 09:51:36.336930 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:51:37 crc kubenswrapper[4780]: I0219 09:51:37.818033 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.325690 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cthrb"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.327224 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.330276 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.333363 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.351079 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cthrb"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.455898 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rmv\" (UniqueName: \"kubernetes.io/projected/4f78c83f-0497-4f37-bd31-e73228b93e78-kube-api-access-g9rmv\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.455957 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-config-data\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.456020 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.456172 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-scripts\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.475459 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.477309 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.491814 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.566634 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qmt2\" (UniqueName: \"kubernetes.io/projected/f193d65d-51c9-46e8-b850-bf0caecf7b00-kube-api-access-4qmt2\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583331 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f193d65d-51c9-46e8-b850-bf0caecf7b00-logs\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583393 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-scripts\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 
09:51:38.583431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9rmv\" (UniqueName: \"kubernetes.io/projected/4f78c83f-0497-4f37-bd31-e73228b93e78-kube-api-access-g9rmv\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583465 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-config-data\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583555 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583591 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-config-data\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.583663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.592854 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-scripts\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.600443 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-config-data\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.612296 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9rmv\" (UniqueName: \"kubernetes.io/projected/4f78c83f-0497-4f37-bd31-e73228b93e78-kube-api-access-g9rmv\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.626283 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cthrb\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.654956 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.656568 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.657351 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.664255 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.671615 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.707350 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.711936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qmt2\" (UniqueName: \"kubernetes.io/projected/f193d65d-51c9-46e8-b850-bf0caecf7b00-kube-api-access-4qmt2\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.712026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f193d65d-51c9-46e8-b850-bf0caecf7b00-logs\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.712310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-config-data\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.712402 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 
09:51:38.712751 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f193d65d-51c9-46e8-b850-bf0caecf7b00-logs\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.723632 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.731646 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.732938 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.735793 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.736529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-config-data\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.744821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qmt2\" (UniqueName: \"kubernetes.io/projected/f193d65d-51c9-46e8-b850-bf0caecf7b00-kube-api-access-4qmt2\") pod \"nova-api-0\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.768236 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cc4766b9-h5zqn"] 
Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.770324 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.806549 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cc4766b9-h5zqn"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.814209 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.815805 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.817813 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819088 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819166 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b167d31-18d2-4193-a336-586e3c7e2b05-logs\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819225 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhc9b\" (UniqueName: \"kubernetes.io/projected/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-kube-api-access-bhc9b\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819395 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crvr\" (UniqueName: \"kubernetes.io/projected/3b167d31-18d2-4193-a336-586e3c7e2b05-kube-api-access-7crvr\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-config-data\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.819661 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.833677 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.843246 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.921817 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-config\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.921899 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7crvr\" (UniqueName: \"kubernetes.io/projected/3b167d31-18d2-4193-a336-586e3c7e2b05-kube-api-access-7crvr\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.921916 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-config-data\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.921945 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzwk\" (UniqueName: \"kubernetes.io/projected/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-kube-api-access-jmzwk\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.921965 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" 
Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.921983 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-dns-svc\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922020 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-config-data\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922069 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922089 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922117 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922158 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922177 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b167d31-18d2-4193-a336-586e3c7e2b05-logs\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxh7m\" (UniqueName: \"kubernetes.io/projected/8e6a81ab-c7b2-429e-a438-09c4268ceee2-kube-api-access-hxh7m\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.922237 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhc9b\" (UniqueName: \"kubernetes.io/projected/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-kube-api-access-bhc9b\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.927604 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3b167d31-18d2-4193-a336-586e3c7e2b05-logs\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.932188 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-config-data\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.945804 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.946395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.949375 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crvr\" (UniqueName: \"kubernetes.io/projected/3b167d31-18d2-4193-a336-586e3c7e2b05-kube-api-access-7crvr\") pod \"nova-metadata-0\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " pod="openstack/nova-metadata-0" Feb 19 09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.958854 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 
09:51:38 crc kubenswrapper[4780]: I0219 09:51:38.979695 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhc9b\" (UniqueName: \"kubernetes.io/projected/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-kube-api-access-bhc9b\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.023497 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-config-data\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.023874 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzwk\" (UniqueName: \"kubernetes.io/projected/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-kube-api-access-jmzwk\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.023911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.023931 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-dns-svc\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.023998 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.024026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.024048 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxh7m\" (UniqueName: \"kubernetes.io/projected/8e6a81ab-c7b2-429e-a438-09c4268ceee2-kube-api-access-hxh7m\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.024086 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-config\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.025076 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-config\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.025240 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-dns-svc\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.028097 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.029040 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.034279 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.040461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-config-data\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.061011 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxh7m\" (UniqueName: \"kubernetes.io/projected/8e6a81ab-c7b2-429e-a438-09c4268ceee2-kube-api-access-hxh7m\") pod \"dnsmasq-dns-6cc4766b9-h5zqn\" (UID: 
\"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.081758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzwk\" (UniqueName: \"kubernetes.io/projected/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-kube-api-access-jmzwk\") pod \"nova-scheduler-0\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.146088 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.202760 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.236531 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.293715 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.424503 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cthrb"] Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.508773 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cthrb" event={"ID":"4f78c83f-0497-4f37-bd31-e73228b93e78","Type":"ContainerStarted","Data":"cc3994f0f614954d2704e9c818d1204cd4300a03481ad42779d5a259129788ad"} Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.512089 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.684746 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxgjt"] Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.686273 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.688766 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.688988 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.701475 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxgjt"] Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.778884 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:39 crc kubenswrapper[4780]: W0219 09:51:39.783296 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b167d31_18d2_4193_a336_586e3c7e2b05.slice/crio-0a3fff8e421334790d9c1a1fa6aef12e833b21221f443194adf5fb4bdb276eb8 WatchSource:0}: Error finding container 0a3fff8e421334790d9c1a1fa6aef12e833b21221f443194adf5fb4bdb276eb8: Status 404 returned error can't find the container with id 0a3fff8e421334790d9c1a1fa6aef12e833b21221f443194adf5fb4bdb276eb8 Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.849628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-scripts\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.849745 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-config-data\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.849787 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.849861 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9cc5\" (UniqueName: \"kubernetes.io/projected/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-kube-api-access-f9cc5\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: 
\"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.926680 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.937767 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.953101 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.953474 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9cc5\" (UniqueName: \"kubernetes.io/projected/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-kube-api-access-f9cc5\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.953614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-scripts\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.953725 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-config-data\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " 
pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.957905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.962148 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-scripts\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.964849 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-config-data\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:39 crc kubenswrapper[4780]: I0219 09:51:39.975811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9cc5\" (UniqueName: \"kubernetes.io/projected/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-kube-api-access-f9cc5\") pod \"nova-cell1-conductor-db-sync-qxgjt\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.019724 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cc4766b9-h5zqn"] Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.214024 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.535298 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5af5e1-ec66-48de-bdd0-1bd26eb64016","Type":"ContainerStarted","Data":"56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.535851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5af5e1-ec66-48de-bdd0-1bd26eb64016","Type":"ContainerStarted","Data":"b69a93dadcdb6310ce90f3a48ac61c3001a9ea532eb849c9e6d76225188cce3c"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.538700 4780 generic.go:334] "Generic (PLEG): container finished" podID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerID="131d93aded5cc873d9e4f37a23ac1ee0950fdf430209547b98e466635beaa688" exitCode=0 Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.538762 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" event={"ID":"8e6a81ab-c7b2-429e-a438-09c4268ceee2","Type":"ContainerDied","Data":"131d93aded5cc873d9e4f37a23ac1ee0950fdf430209547b98e466635beaa688"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.538789 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" event={"ID":"8e6a81ab-c7b2-429e-a438-09c4268ceee2","Type":"ContainerStarted","Data":"da785d77b8d70cc1b3c3287bd0da44a50bddca24618642597736ddf94ea07ebc"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.551240 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cthrb" event={"ID":"4f78c83f-0497-4f37-bd31-e73228b93e78","Type":"ContainerStarted","Data":"7eda8e95a36077a313a814115829f0df28ffa033671f6302114e1fa6599254ea"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.556023 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5","Type":"ContainerStarted","Data":"7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.556070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5","Type":"ContainerStarted","Data":"3803806226cc3d13aaeade202ee9fbb19ba2a6d2244f62bb581c5e6ccab8651b"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.558854 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b167d31-18d2-4193-a336-586e3c7e2b05","Type":"ContainerStarted","Data":"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.558916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b167d31-18d2-4193-a336-586e3c7e2b05","Type":"ContainerStarted","Data":"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.558931 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b167d31-18d2-4193-a336-586e3c7e2b05","Type":"ContainerStarted","Data":"0a3fff8e421334790d9c1a1fa6aef12e833b21221f443194adf5fb4bdb276eb8"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.577385 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f193d65d-51c9-46e8-b850-bf0caecf7b00","Type":"ContainerStarted","Data":"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.577432 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f193d65d-51c9-46e8-b850-bf0caecf7b00","Type":"ContainerStarted","Data":"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.577737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f193d65d-51c9-46e8-b850-bf0caecf7b00","Type":"ContainerStarted","Data":"cfc012c8cd69386433070af273154e69961e0df6d420b6d6536c97972bb11555"} Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.580707 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.580680293 podStartE2EDuration="2.580680293s" podCreationTimestamp="2026-02-19 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:40.575093479 +0000 UTC m=+5443.318750928" watchObservedRunningTime="2026-02-19 09:51:40.580680293 +0000 UTC m=+5443.324337742" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.653874 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.653853056 podStartE2EDuration="2.653853056s" podCreationTimestamp="2026-02-19 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:40.636459148 +0000 UTC m=+5443.380116617" watchObservedRunningTime="2026-02-19 09:51:40.653853056 +0000 UTC m=+5443.397510505" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.694339 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.694306198 podStartE2EDuration="2.694306198s" podCreationTimestamp="2026-02-19 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
09:51:40.661656607 +0000 UTC m=+5443.405314056" watchObservedRunningTime="2026-02-19 09:51:40.694306198 +0000 UTC m=+5443.437963667" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.721030 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cthrb" podStartSLOduration=2.721003875 podStartE2EDuration="2.721003875s" podCreationTimestamp="2026-02-19 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:40.688931249 +0000 UTC m=+5443.432588698" watchObservedRunningTime="2026-02-19 09:51:40.721003875 +0000 UTC m=+5443.464661324" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.730721 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.730704014 podStartE2EDuration="2.730704014s" podCreationTimestamp="2026-02-19 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:40.716071728 +0000 UTC m=+5443.459729177" watchObservedRunningTime="2026-02-19 09:51:40.730704014 +0000 UTC m=+5443.474361463" Feb 19 09:51:40 crc kubenswrapper[4780]: I0219 09:51:40.786794 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxgjt"] Feb 19 09:51:40 crc kubenswrapper[4780]: W0219 09:51:40.795256 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3c9b76e_da33_4c48_8fb6_2b2c461007c7.slice/crio-a7206bc35f8d8987509aa1def4c3980272f5d7a45de480435cbdc7fd5ae3291d WatchSource:0}: Error finding container a7206bc35f8d8987509aa1def4c3980272f5d7a45de480435cbdc7fd5ae3291d: Status 404 returned error can't find the container with id a7206bc35f8d8987509aa1def4c3980272f5d7a45de480435cbdc7fd5ae3291d Feb 19 09:51:41 crc 
kubenswrapper[4780]: I0219 09:51:41.589504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" event={"ID":"8e6a81ab-c7b2-429e-a438-09c4268ceee2","Type":"ContainerStarted","Data":"5c0b0a1f514282c43c570dd2259f7aa329151f8e6f8c8064fc6792afa781ebc5"} Feb 19 09:51:41 crc kubenswrapper[4780]: I0219 09:51:41.590824 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:41 crc kubenswrapper[4780]: I0219 09:51:41.591921 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" event={"ID":"b3c9b76e-da33-4c48-8fb6-2b2c461007c7","Type":"ContainerStarted","Data":"09c807cb2c88b53ca9cfef415d40ce92a2eecbbbaa908a706ceae74005d48a69"} Feb 19 09:51:41 crc kubenswrapper[4780]: I0219 09:51:41.591977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" event={"ID":"b3c9b76e-da33-4c48-8fb6-2b2c461007c7","Type":"ContainerStarted","Data":"a7206bc35f8d8987509aa1def4c3980272f5d7a45de480435cbdc7fd5ae3291d"} Feb 19 09:51:41 crc kubenswrapper[4780]: I0219 09:51:41.615465 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" podStartSLOduration=3.61544326 podStartE2EDuration="3.61544326s" podCreationTimestamp="2026-02-19 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:41.609456306 +0000 UTC m=+5444.353113755" watchObservedRunningTime="2026-02-19 09:51:41.61544326 +0000 UTC m=+5444.359100709" Feb 19 09:51:44 crc kubenswrapper[4780]: I0219 09:51:44.147106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 09:51:44 crc kubenswrapper[4780]: I0219 09:51:44.147928 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 19 09:51:44 crc kubenswrapper[4780]: I0219 09:51:44.203989 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:44 crc kubenswrapper[4780]: I0219 09:51:44.294365 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 09:51:44 crc kubenswrapper[4780]: I0219 09:51:44.622241 4780 generic.go:334] "Generic (PLEG): container finished" podID="b3c9b76e-da33-4c48-8fb6-2b2c461007c7" containerID="09c807cb2c88b53ca9cfef415d40ce92a2eecbbbaa908a706ceae74005d48a69" exitCode=0 Feb 19 09:51:44 crc kubenswrapper[4780]: I0219 09:51:44.622289 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" event={"ID":"b3c9b76e-da33-4c48-8fb6-2b2c461007c7","Type":"ContainerDied","Data":"09c807cb2c88b53ca9cfef415d40ce92a2eecbbbaa908a706ceae74005d48a69"} Feb 19 09:51:45 crc kubenswrapper[4780]: I0219 09:51:45.636249 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f78c83f-0497-4f37-bd31-e73228b93e78" containerID="7eda8e95a36077a313a814115829f0df28ffa033671f6302114e1fa6599254ea" exitCode=0 Feb 19 09:51:45 crc kubenswrapper[4780]: I0219 09:51:45.636580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cthrb" event={"ID":"4f78c83f-0497-4f37-bd31-e73228b93e78","Type":"ContainerDied","Data":"7eda8e95a36077a313a814115829f0df28ffa033671f6302114e1fa6599254ea"} Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.036098 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.088943 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9cc5\" (UniqueName: \"kubernetes.io/projected/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-kube-api-access-f9cc5\") pod \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.089014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-scripts\") pod \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.089143 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-combined-ca-bundle\") pod \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.089205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-config-data\") pod \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\" (UID: \"b3c9b76e-da33-4c48-8fb6-2b2c461007c7\") " Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.094909 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-scripts" (OuterVolumeSpecName: "scripts") pod "b3c9b76e-da33-4c48-8fb6-2b2c461007c7" (UID: "b3c9b76e-da33-4c48-8fb6-2b2c461007c7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.095654 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-kube-api-access-f9cc5" (OuterVolumeSpecName: "kube-api-access-f9cc5") pod "b3c9b76e-da33-4c48-8fb6-2b2c461007c7" (UID: "b3c9b76e-da33-4c48-8fb6-2b2c461007c7"). InnerVolumeSpecName "kube-api-access-f9cc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.128974 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-config-data" (OuterVolumeSpecName: "config-data") pod "b3c9b76e-da33-4c48-8fb6-2b2c461007c7" (UID: "b3c9b76e-da33-4c48-8fb6-2b2c461007c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.128992 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3c9b76e-da33-4c48-8fb6-2b2c461007c7" (UID: "b3c9b76e-da33-4c48-8fb6-2b2c461007c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.192376 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.192428 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.192447 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9cc5\" (UniqueName: \"kubernetes.io/projected/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-kube-api-access-f9cc5\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.192469 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c9b76e-da33-4c48-8fb6-2b2c461007c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.653743 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.653947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxgjt" event={"ID":"b3c9b76e-da33-4c48-8fb6-2b2c461007c7","Type":"ContainerDied","Data":"a7206bc35f8d8987509aa1def4c3980272f5d7a45de480435cbdc7fd5ae3291d"} Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.662405 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7206bc35f8d8987509aa1def4c3980272f5d7a45de480435cbdc7fd5ae3291d" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.746326 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:51:46 crc kubenswrapper[4780]: E0219 09:51:46.747565 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c9b76e-da33-4c48-8fb6-2b2c461007c7" containerName="nova-cell1-conductor-db-sync" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.747588 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c9b76e-da33-4c48-8fb6-2b2c461007c7" containerName="nova-cell1-conductor-db-sync" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.747765 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c9b76e-da33-4c48-8fb6-2b2c461007c7" containerName="nova-cell1-conductor-db-sync" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.749774 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.752657 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.778940 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.851571 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.852014 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.852175 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqqx\" (UniqueName: \"kubernetes.io/projected/ac8d588b-2078-4e44-bcd5-3d31116fc462-kube-api-access-5jqqx\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.956203 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc 
kubenswrapper[4780]: I0219 09:51:46.956694 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.956785 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqqx\" (UniqueName: \"kubernetes.io/projected/ac8d588b-2078-4e44-bcd5-3d31116fc462-kube-api-access-5jqqx\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.964850 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.978432 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:46 crc kubenswrapper[4780]: I0219 09:51:46.978854 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqqx\" (UniqueName: \"kubernetes.io/projected/ac8d588b-2078-4e44-bcd5-3d31116fc462-kube-api-access-5jqqx\") pod \"nova-cell1-conductor-0\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.070627 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.161070 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.263529 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-config-data\") pod \"4f78c83f-0497-4f37-bd31-e73228b93e78\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.264183 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-combined-ca-bundle\") pod \"4f78c83f-0497-4f37-bd31-e73228b93e78\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.264250 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9rmv\" (UniqueName: \"kubernetes.io/projected/4f78c83f-0497-4f37-bd31-e73228b93e78-kube-api-access-g9rmv\") pod \"4f78c83f-0497-4f37-bd31-e73228b93e78\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.264292 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-scripts\") pod \"4f78c83f-0497-4f37-bd31-e73228b93e78\" (UID: \"4f78c83f-0497-4f37-bd31-e73228b93e78\") " Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.284341 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f78c83f-0497-4f37-bd31-e73228b93e78-kube-api-access-g9rmv" (OuterVolumeSpecName: "kube-api-access-g9rmv") pod "4f78c83f-0497-4f37-bd31-e73228b93e78" (UID: 
"4f78c83f-0497-4f37-bd31-e73228b93e78"). InnerVolumeSpecName "kube-api-access-g9rmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.297138 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f78c83f-0497-4f37-bd31-e73228b93e78" (UID: "4f78c83f-0497-4f37-bd31-e73228b93e78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.298268 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-scripts" (OuterVolumeSpecName: "scripts") pod "4f78c83f-0497-4f37-bd31-e73228b93e78" (UID: "4f78c83f-0497-4f37-bd31-e73228b93e78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.308352 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-config-data" (OuterVolumeSpecName: "config-data") pod "4f78c83f-0497-4f37-bd31-e73228b93e78" (UID: "4f78c83f-0497-4f37-bd31-e73228b93e78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.366675 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.366714 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.366732 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9rmv\" (UniqueName: \"kubernetes.io/projected/4f78c83f-0497-4f37-bd31-e73228b93e78-kube-api-access-g9rmv\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.366746 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f78c83f-0497-4f37-bd31-e73228b93e78-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.533633 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:51:47 crc kubenswrapper[4780]: W0219 09:51:47.540542 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8d588b_2078_4e44_bcd5_3d31116fc462.slice/crio-7424fa027693fdb5d43c618ad19252daca9889732b92503ca9f16b95c57902b7 WatchSource:0}: Error finding container 7424fa027693fdb5d43c618ad19252daca9889732b92503ca9f16b95c57902b7: Status 404 returned error can't find the container with id 7424fa027693fdb5d43c618ad19252daca9889732b92503ca9f16b95c57902b7 Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.662526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"ac8d588b-2078-4e44-bcd5-3d31116fc462","Type":"ContainerStarted","Data":"7424fa027693fdb5d43c618ad19252daca9889732b92503ca9f16b95c57902b7"} Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.664605 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cthrb" event={"ID":"4f78c83f-0497-4f37-bd31-e73228b93e78","Type":"ContainerDied","Data":"cc3994f0f614954d2704e9c818d1204cd4300a03481ad42779d5a259129788ad"} Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.664659 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc3994f0f614954d2704e9c818d1204cd4300a03481ad42779d5a259129788ad" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.664718 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cthrb" Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.837869 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.838668 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-log" containerID="cri-o://203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419" gracePeriod=30 Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.838744 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-api" containerID="cri-o://c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42" gracePeriod=30 Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.854538 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.854791 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="4c5af5e1-ec66-48de-bdd0-1bd26eb64016" containerName="nova-scheduler-scheduler" containerID="cri-o://56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f" gracePeriod=30 Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.877700 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.878024 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-log" containerID="cri-o://fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb" gracePeriod=30 Feb 19 09:51:47 crc kubenswrapper[4780]: I0219 09:51:47.878208 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-metadata" containerID="cri-o://233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf" gracePeriod=30 Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.484827 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.519924 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.591246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-combined-ca-bundle\") pod \"f193d65d-51c9-46e8-b850-bf0caecf7b00\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.591402 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qmt2\" (UniqueName: \"kubernetes.io/projected/f193d65d-51c9-46e8-b850-bf0caecf7b00-kube-api-access-4qmt2\") pod \"f193d65d-51c9-46e8-b850-bf0caecf7b00\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.591476 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f193d65d-51c9-46e8-b850-bf0caecf7b00-logs\") pod \"f193d65d-51c9-46e8-b850-bf0caecf7b00\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.591492 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-config-data\") pod \"f193d65d-51c9-46e8-b850-bf0caecf7b00\" (UID: \"f193d65d-51c9-46e8-b850-bf0caecf7b00\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.592456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f193d65d-51c9-46e8-b850-bf0caecf7b00-logs" (OuterVolumeSpecName: "logs") pod "f193d65d-51c9-46e8-b850-bf0caecf7b00" (UID: "f193d65d-51c9-46e8-b850-bf0caecf7b00"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.596341 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f193d65d-51c9-46e8-b850-bf0caecf7b00-kube-api-access-4qmt2" (OuterVolumeSpecName: "kube-api-access-4qmt2") pod "f193d65d-51c9-46e8-b850-bf0caecf7b00" (UID: "f193d65d-51c9-46e8-b850-bf0caecf7b00"). InnerVolumeSpecName "kube-api-access-4qmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.617403 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-config-data" (OuterVolumeSpecName: "config-data") pod "f193d65d-51c9-46e8-b850-bf0caecf7b00" (UID: "f193d65d-51c9-46e8-b850-bf0caecf7b00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.620799 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f193d65d-51c9-46e8-b850-bf0caecf7b00" (UID: "f193d65d-51c9-46e8-b850-bf0caecf7b00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680203 4780 generic.go:334] "Generic (PLEG): container finished" podID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerID="233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf" exitCode=0 Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680234 4780 generic.go:334] "Generic (PLEG): container finished" podID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerID="fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb" exitCode=143 Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b167d31-18d2-4193-a336-586e3c7e2b05","Type":"ContainerDied","Data":"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680290 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b167d31-18d2-4193-a336-586e3c7e2b05","Type":"ContainerDied","Data":"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680331 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b167d31-18d2-4193-a336-586e3c7e2b05","Type":"ContainerDied","Data":"0a3fff8e421334790d9c1a1fa6aef12e833b21221f443194adf5fb4bdb276eb8"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.680347 4780 scope.go:117] "RemoveContainer" containerID="233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.683734 4780 generic.go:334] "Generic (PLEG): container finished" podID="f193d65d-51c9-46e8-b850-bf0caecf7b00" 
containerID="c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42" exitCode=0 Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.683754 4780 generic.go:334] "Generic (PLEG): container finished" podID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerID="203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419" exitCode=143 Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.683784 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f193d65d-51c9-46e8-b850-bf0caecf7b00","Type":"ContainerDied","Data":"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.683813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f193d65d-51c9-46e8-b850-bf0caecf7b00","Type":"ContainerDied","Data":"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.683824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f193d65d-51c9-46e8-b850-bf0caecf7b00","Type":"ContainerDied","Data":"cfc012c8cd69386433070af273154e69961e0df6d420b6d6536c97972bb11555"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.683887 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.687466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ac8d588b-2078-4e44-bcd5-3d31116fc462","Type":"ContainerStarted","Data":"62c2f68ca215a1f039883ee42ca809a1866ec3496f78c274727a2c331858952d"} Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.687659 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.692897 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7crvr\" (UniqueName: \"kubernetes.io/projected/3b167d31-18d2-4193-a336-586e3c7e2b05-kube-api-access-7crvr\") pod \"3b167d31-18d2-4193-a336-586e3c7e2b05\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.692981 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-config-data\") pod \"3b167d31-18d2-4193-a336-586e3c7e2b05\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693102 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b167d31-18d2-4193-a336-586e3c7e2b05-logs\") pod \"3b167d31-18d2-4193-a336-586e3c7e2b05\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693155 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-combined-ca-bundle\") pod \"3b167d31-18d2-4193-a336-586e3c7e2b05\" (UID: \"3b167d31-18d2-4193-a336-586e3c7e2b05\") " Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 
09:51:48.693487 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b167d31-18d2-4193-a336-586e3c7e2b05-logs" (OuterVolumeSpecName: "logs") pod "3b167d31-18d2-4193-a336-586e3c7e2b05" (UID: "3b167d31-18d2-4193-a336-586e3c7e2b05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693771 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b167d31-18d2-4193-a336-586e3c7e2b05-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693787 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693798 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qmt2\" (UniqueName: \"kubernetes.io/projected/f193d65d-51c9-46e8-b850-bf0caecf7b00-kube-api-access-4qmt2\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693807 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f193d65d-51c9-46e8-b850-bf0caecf7b00-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.693816 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f193d65d-51c9-46e8-b850-bf0caecf7b00-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.696522 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b167d31-18d2-4193-a336-586e3c7e2b05-kube-api-access-7crvr" (OuterVolumeSpecName: "kube-api-access-7crvr") pod "3b167d31-18d2-4193-a336-586e3c7e2b05" (UID: 
"3b167d31-18d2-4193-a336-586e3c7e2b05"). InnerVolumeSpecName "kube-api-access-7crvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.708714 4780 scope.go:117] "RemoveContainer" containerID="fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.713853 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.713831426 podStartE2EDuration="2.713831426s" podCreationTimestamp="2026-02-19 09:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:48.703778057 +0000 UTC m=+5451.447435526" watchObservedRunningTime="2026-02-19 09:51:48.713831426 +0000 UTC m=+5451.457488875" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.735234 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.747901 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-config-data" (OuterVolumeSpecName: "config-data") pod "3b167d31-18d2-4193-a336-586e3c7e2b05" (UID: "3b167d31-18d2-4193-a336-586e3c7e2b05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.748354 4780 scope.go:117] "RemoveContainer" containerID="233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.752070 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf\": container with ID starting with 233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf not found: ID does not exist" containerID="233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.752135 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf"} err="failed to get container status \"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf\": rpc error: code = NotFound desc = could not find container \"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf\": container with ID starting with 233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.752163 4780 scope.go:117] "RemoveContainer" containerID="fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.752447 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb\": container with ID starting with fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb not found: ID does not exist" containerID="fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.752461 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb"} err="failed to get container status \"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb\": rpc error: code = NotFound desc = could not find container \"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb\": container with ID starting with fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.752473 4780 scope.go:117] "RemoveContainer" containerID="233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.752833 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf"} err="failed to get container status \"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf\": rpc error: code = NotFound desc = could not find container \"233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf\": container with ID starting with 233fce52860574b78178f1858a7b092f9ef48e4fa49653956f6ac2684c9fa5cf not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.752849 4780 scope.go:117] "RemoveContainer" containerID="fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.753238 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb"} err="failed to get container status \"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb\": rpc error: code = NotFound desc = could not find container \"fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb\": container with ID starting with 
fe4f254676a597b9aae016fd93a6a817def7c4ea88a48897e787839c90373efb not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.753256 4780 scope.go:117] "RemoveContainer" containerID="c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.762098 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b167d31-18d2-4193-a336-586e3c7e2b05" (UID: "3b167d31-18d2-4193-a336-586e3c7e2b05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.767225 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.781482 4780 scope.go:117] "RemoveContainer" containerID="203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.792409 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.792848 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-log" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.792866 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-log" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.792883 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-api" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.792891 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-api" Feb 19 09:51:48 
crc kubenswrapper[4780]: E0219 09:51:48.792914 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f78c83f-0497-4f37-bd31-e73228b93e78" containerName="nova-manage" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.792921 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f78c83f-0497-4f37-bd31-e73228b93e78" containerName="nova-manage" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.792935 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-metadata" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.792941 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-metadata" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.792951 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-log" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.792958 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-log" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.793162 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f78c83f-0497-4f37-bd31-e73228b93e78" containerName="nova-manage" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.793177 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-api" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.793191 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" containerName="nova-api-log" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.793207 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-metadata" Feb 19 09:51:48 
crc kubenswrapper[4780]: I0219 09:51:48.793216 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" containerName="nova-metadata-log" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.794303 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.795576 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7crvr\" (UniqueName: \"kubernetes.io/projected/3b167d31-18d2-4193-a336-586e3c7e2b05-kube-api-access-7crvr\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.795622 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.795637 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b167d31-18d2-4193-a336-586e3c7e2b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.797845 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.802840 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.806921 4780 scope.go:117] "RemoveContainer" containerID="c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.810894 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42\": container with ID starting with 
c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42 not found: ID does not exist" containerID="c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.810948 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42"} err="failed to get container status \"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42\": rpc error: code = NotFound desc = could not find container \"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42\": container with ID starting with c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42 not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.810978 4780 scope.go:117] "RemoveContainer" containerID="203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419" Feb 19 09:51:48 crc kubenswrapper[4780]: E0219 09:51:48.812156 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419\": container with ID starting with 203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419 not found: ID does not exist" containerID="203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.812203 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419"} err="failed to get container status \"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419\": rpc error: code = NotFound desc = could not find container \"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419\": container with ID starting with 203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419 not found: ID does not 
exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.812263 4780 scope.go:117] "RemoveContainer" containerID="c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.812761 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42"} err="failed to get container status \"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42\": rpc error: code = NotFound desc = could not find container \"c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42\": container with ID starting with c196741942fa983a3c2dfebd54049ced926b136b126a6db6ac0401dd37a96a42 not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.812786 4780 scope.go:117] "RemoveContainer" containerID="203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.813179 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419"} err="failed to get container status \"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419\": rpc error: code = NotFound desc = could not find container \"203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419\": container with ID starting with 203dbbef112dadcf80270c8cc9ede747d80b8ab104812e9cdac4a8076f646419 not found: ID does not exist" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.897061 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f060c4b-4164-49cf-816e-f80f9116ea8f-logs\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.897387 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-config-data\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.897632 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.897687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562qh\" (UniqueName: \"kubernetes.io/projected/4f060c4b-4164-49cf-816e-f80f9116ea8f-kube-api-access-562qh\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.998968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.999155 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562qh\" (UniqueName: \"kubernetes.io/projected/4f060c4b-4164-49cf-816e-f80f9116ea8f-kube-api-access-562qh\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.999223 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f060c4b-4164-49cf-816e-f80f9116ea8f-logs\") pod 
\"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:48 crc kubenswrapper[4780]: I0219 09:51:48.999306 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-config-data\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.000118 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f060c4b-4164-49cf-816e-f80f9116ea8f-logs\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.006074 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-config-data\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.006864 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.021625 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562qh\" (UniqueName: \"kubernetes.io/projected/4f060c4b-4164-49cf-816e-f80f9116ea8f-kube-api-access-562qh\") pod \"nova-api-0\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " pod="openstack/nova-api-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.115796 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.122002 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.135344 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.151330 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.154678 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.161720 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.164528 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.204412 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.236390 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.238297 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.335732 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad91c81b-3889-413d-93b2-f57d5d8201c2-logs\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.336593 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-config-data\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.336804 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8n6\" (UniqueName: \"kubernetes.io/projected/ad91c81b-3889-413d-93b2-f57d5d8201c2-kube-api-access-zr8n6\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.336902 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.379329 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c8d5b6c-gb9ph"] Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.379625 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerName="dnsmasq-dns" containerID="cri-o://d234a4ca9a722a474abdc8be312eb1abaa6d338c2addd9bb0fcef3e033d50386" gracePeriod=10 Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.438869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad91c81b-3889-413d-93b2-f57d5d8201c2-logs\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.438983 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-config-data\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.439056 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr8n6\" (UniqueName: \"kubernetes.io/projected/ad91c81b-3889-413d-93b2-f57d5d8201c2-kube-api-access-zr8n6\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.439100 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.440576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad91c81b-3889-413d-93b2-f57d5d8201c2-logs\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.446114 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-config-data\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.448917 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.459516 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr8n6\" (UniqueName: \"kubernetes.io/projected/ad91c81b-3889-413d-93b2-f57d5d8201c2-kube-api-access-zr8n6\") pod \"nova-metadata-0\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.549300 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.680389 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.700105 4780 generic.go:334] "Generic (PLEG): container finished" podID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerID="d234a4ca9a722a474abdc8be312eb1abaa6d338c2addd9bb0fcef3e033d50386" exitCode=0 Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.700179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" event={"ID":"2ad88047-8ace-4ac4-abbd-598c0ed48c26","Type":"ContainerDied","Data":"d234a4ca9a722a474abdc8be312eb1abaa6d338c2addd9bb0fcef3e033d50386"} Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.731427 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.952025 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b167d31-18d2-4193-a336-586e3c7e2b05" path="/var/lib/kubelet/pods/3b167d31-18d2-4193-a336-586e3c7e2b05/volumes" Feb 19 09:51:49 crc kubenswrapper[4780]: I0219 09:51:49.952870 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f193d65d-51c9-46e8-b850-bf0caecf7b00" 
path="/var/lib/kubelet/pods/f193d65d-51c9-46e8-b850-bf0caecf7b00/volumes" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.073828 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.398742 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.566161 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf94g\" (UniqueName: \"kubernetes.io/projected/2ad88047-8ace-4ac4-abbd-598c0ed48c26-kube-api-access-kf94g\") pod \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.566232 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-nb\") pod \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.566381 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-dns-svc\") pod \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.566434 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-sb\") pod \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.566455 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-config\") pod \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\" (UID: \"2ad88047-8ace-4ac4-abbd-598c0ed48c26\") " Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.571651 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad88047-8ace-4ac4-abbd-598c0ed48c26-kube-api-access-kf94g" (OuterVolumeSpecName: "kube-api-access-kf94g") pod "2ad88047-8ace-4ac4-abbd-598c0ed48c26" (UID: "2ad88047-8ace-4ac4-abbd-598c0ed48c26"). InnerVolumeSpecName "kube-api-access-kf94g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.613668 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ad88047-8ace-4ac4-abbd-598c0ed48c26" (UID: "2ad88047-8ace-4ac4-abbd-598c0ed48c26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.613849 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ad88047-8ace-4ac4-abbd-598c0ed48c26" (UID: "2ad88047-8ace-4ac4-abbd-598c0ed48c26"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.615062 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-config" (OuterVolumeSpecName: "config") pod "2ad88047-8ace-4ac4-abbd-598c0ed48c26" (UID: "2ad88047-8ace-4ac4-abbd-598c0ed48c26"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.617386 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ad88047-8ace-4ac4-abbd-598c0ed48c26" (UID: "2ad88047-8ace-4ac4-abbd-598c0ed48c26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.668615 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.668643 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.668656 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf94g\" (UniqueName: \"kubernetes.io/projected/2ad88047-8ace-4ac4-abbd-598c0ed48c26-kube-api-access-kf94g\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.668666 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.668674 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad88047-8ace-4ac4-abbd-598c0ed48c26-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.727091 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" 
event={"ID":"2ad88047-8ace-4ac4-abbd-598c0ed48c26","Type":"ContainerDied","Data":"12601749dd4c7800d78d470b21d8bb3295ce8682c0d59f742a3ab8062f616bf1"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.727161 4780 scope.go:117] "RemoveContainer" containerID="d234a4ca9a722a474abdc8be312eb1abaa6d338c2addd9bb0fcef3e033d50386" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.727186 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c8d5b6c-gb9ph" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.733403 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad91c81b-3889-413d-93b2-f57d5d8201c2","Type":"ContainerStarted","Data":"139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.733453 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad91c81b-3889-413d-93b2-f57d5d8201c2","Type":"ContainerStarted","Data":"1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.733468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad91c81b-3889-413d-93b2-f57d5d8201c2","Type":"ContainerStarted","Data":"a3f3ab955244a2f79bb52229f59afa9c8fb6df93a1fd0651d627a9bf3b32ae74"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.740390 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f060c4b-4164-49cf-816e-f80f9116ea8f","Type":"ContainerStarted","Data":"a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.740424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4f060c4b-4164-49cf-816e-f80f9116ea8f","Type":"ContainerStarted","Data":"ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.740438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f060c4b-4164-49cf-816e-f80f9116ea8f","Type":"ContainerStarted","Data":"f4b903665b4632e69fe1297646af80036d0b086cbb56568f962b6e509b2fae54"} Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.749329 4780 scope.go:117] "RemoveContainer" containerID="4ee7917106c6ee2236e4526e34ac024636ab92d9402e0314f43896e39129ad7a" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.759640 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.759614439 podStartE2EDuration="1.759614439s" podCreationTimestamp="2026-02-19 09:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:50.750678209 +0000 UTC m=+5453.494335668" watchObservedRunningTime="2026-02-19 09:51:50.759614439 +0000 UTC m=+5453.503271898" Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.791280 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c8d5b6c-gb9ph"] Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.798321 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c8d5b6c-gb9ph"] Feb 19 09:51:50 crc kubenswrapper[4780]: I0219 09:51:50.800562 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.800552553 podStartE2EDuration="2.800552553s" podCreationTimestamp="2026-02-19 09:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:50.783697609 +0000 UTC m=+5453.527355068" 
watchObservedRunningTime="2026-02-19 09:51:50.800552553 +0000 UTC m=+5453.544210002" Feb 19 09:51:51 crc kubenswrapper[4780]: I0219 09:51:51.956912 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" path="/var/lib/kubelet/pods/2ad88047-8ace-4ac4-abbd-598c0ed48c26/volumes" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.101009 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.223924 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.416151 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-combined-ca-bundle\") pod \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.416535 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-config-data\") pod \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.416603 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzwk\" (UniqueName: \"kubernetes.io/projected/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-kube-api-access-jmzwk\") pod \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\" (UID: \"4c5af5e1-ec66-48de-bdd0-1bd26eb64016\") " Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.422307 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-kube-api-access-jmzwk" 
(OuterVolumeSpecName: "kube-api-access-jmzwk") pod "4c5af5e1-ec66-48de-bdd0-1bd26eb64016" (UID: "4c5af5e1-ec66-48de-bdd0-1bd26eb64016"). InnerVolumeSpecName "kube-api-access-jmzwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.446407 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c5af5e1-ec66-48de-bdd0-1bd26eb64016" (UID: "4c5af5e1-ec66-48de-bdd0-1bd26eb64016"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.448787 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-config-data" (OuterVolumeSpecName: "config-data") pod "4c5af5e1-ec66-48de-bdd0-1bd26eb64016" (UID: "4c5af5e1-ec66-48de-bdd0-1bd26eb64016"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.519889 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.519924 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.519934 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzwk\" (UniqueName: \"kubernetes.io/projected/4c5af5e1-ec66-48de-bdd0-1bd26eb64016-kube-api-access-jmzwk\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.615228 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dqbvj"] Feb 19 09:51:52 crc kubenswrapper[4780]: E0219 09:51:52.615705 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerName="init" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.615729 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerName="init" Feb 19 09:51:52 crc kubenswrapper[4780]: E0219 09:51:52.615751 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5af5e1-ec66-48de-bdd0-1bd26eb64016" containerName="nova-scheduler-scheduler" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.615759 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5af5e1-ec66-48de-bdd0-1bd26eb64016" containerName="nova-scheduler-scheduler" Feb 19 09:51:52 crc kubenswrapper[4780]: E0219 09:51:52.615793 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" 
containerName="dnsmasq-dns" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.615803 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerName="dnsmasq-dns" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.616074 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5af5e1-ec66-48de-bdd0-1bd26eb64016" containerName="nova-scheduler-scheduler" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.616106 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad88047-8ace-4ac4-abbd-598c0ed48c26" containerName="dnsmasq-dns" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.616883 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.619378 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.622266 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.644658 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqbvj"] Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.723586 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-config-data\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.723628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-scripts\") pod 
\"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.723649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4htj\" (UniqueName: \"kubernetes.io/projected/b3da8aec-25ad-4017-bb05-6b87fa4f359a-kube-api-access-l4htj\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.723728 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.758454 4780 generic.go:334] "Generic (PLEG): container finished" podID="4c5af5e1-ec66-48de-bdd0-1bd26eb64016" containerID="56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f" exitCode=0 Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.758496 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5af5e1-ec66-48de-bdd0-1bd26eb64016","Type":"ContainerDied","Data":"56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f"} Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.758524 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5af5e1-ec66-48de-bdd0-1bd26eb64016","Type":"ContainerDied","Data":"b69a93dadcdb6310ce90f3a48ac61c3001a9ea532eb849c9e6d76225188cce3c"} Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.758539 4780 scope.go:117] "RemoveContainer" 
containerID="56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.758650 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.789166 4780 scope.go:117] "RemoveContainer" containerID="56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f" Feb 19 09:51:52 crc kubenswrapper[4780]: E0219 09:51:52.789720 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f\": container with ID starting with 56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f not found: ID does not exist" containerID="56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.789761 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f"} err="failed to get container status \"56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f\": rpc error: code = NotFound desc = could not find container \"56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f\": container with ID starting with 56c6e2d3b650a6439aa32fb9ab490b70a6d40a526457e760774cdcad27ee6a1f not found: ID does not exist" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.815511 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.825908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-config-data\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " 
pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.826274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-scripts\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.826394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4htj\" (UniqueName: \"kubernetes.io/projected/b3da8aec-25ad-4017-bb05-6b87fa4f359a-kube-api-access-l4htj\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.826647 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.830962 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.836177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.836395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-scripts\") pod 
\"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.850380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4htj\" (UniqueName: \"kubernetes.io/projected/b3da8aec-25ad-4017-bb05-6b87fa4f359a-kube-api-access-l4htj\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.850469 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.851910 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.858044 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-config-data\") pod \"nova-cell1-cell-mapping-dqbvj\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.858587 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.862548 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:52 crc kubenswrapper[4780]: I0219 09:51:52.940334 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.030671 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-config-data\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.030813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.031101 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44dxj\" (UniqueName: \"kubernetes.io/projected/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-kube-api-access-44dxj\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.133910 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.134000 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44dxj\" (UniqueName: \"kubernetes.io/projected/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-kube-api-access-44dxj\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 
crc kubenswrapper[4780]: I0219 09:51:53.134143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-config-data\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.138793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-config-data\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.138904 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.149466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44dxj\" (UniqueName: \"kubernetes.io/projected/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-kube-api-access-44dxj\") pod \"nova-scheduler-0\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.190681 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.211427 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqbvj"] Feb 19 09:51:53 crc kubenswrapper[4780]: W0219 09:51:53.746709 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cab20f6_f252_4a8b_ad84_73fb9d0c9d80.slice/crio-103fc71ab0e061e835d4eeb23508089bdb70cd0d95feaf5c29637b8852e724d7 WatchSource:0}: Error finding container 103fc71ab0e061e835d4eeb23508089bdb70cd0d95feaf5c29637b8852e724d7: Status 404 returned error can't find the container with id 103fc71ab0e061e835d4eeb23508089bdb70cd0d95feaf5c29637b8852e724d7 Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.749412 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.771036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80","Type":"ContainerStarted","Data":"103fc71ab0e061e835d4eeb23508089bdb70cd0d95feaf5c29637b8852e724d7"} Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.776111 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqbvj" event={"ID":"b3da8aec-25ad-4017-bb05-6b87fa4f359a","Type":"ContainerStarted","Data":"3711dda938dee06ddd9ae46f83b57c758dd7fccef8194dd1d99fcb8c66adf618"} Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.776179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqbvj" event={"ID":"b3da8aec-25ad-4017-bb05-6b87fa4f359a","Type":"ContainerStarted","Data":"9d5c9b731ff9faadbf309e34164c2ca116f6123287cb4a0ba3f3f1273bc6fac5"} Feb 19 09:51:53 crc kubenswrapper[4780]: I0219 09:51:53.954414 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4c5af5e1-ec66-48de-bdd0-1bd26eb64016" path="/var/lib/kubelet/pods/4c5af5e1-ec66-48de-bdd0-1bd26eb64016/volumes" Feb 19 09:51:54 crc kubenswrapper[4780]: I0219 09:51:54.549947 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 09:51:54 crc kubenswrapper[4780]: I0219 09:51:54.550017 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 09:51:54 crc kubenswrapper[4780]: I0219 09:51:54.797268 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80","Type":"ContainerStarted","Data":"a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f"} Feb 19 09:51:54 crc kubenswrapper[4780]: I0219 09:51:54.825920 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dqbvj" podStartSLOduration=2.8258961830000002 podStartE2EDuration="2.825896183s" podCreationTimestamp="2026-02-19 09:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:53.797766027 +0000 UTC m=+5456.541423516" watchObservedRunningTime="2026-02-19 09:51:54.825896183 +0000 UTC m=+5457.569553632" Feb 19 09:51:54 crc kubenswrapper[4780]: I0219 09:51:54.840177 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.84015077 podStartE2EDuration="2.84015077s" podCreationTimestamp="2026-02-19 09:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:54.824356013 +0000 UTC m=+5457.568013502" watchObservedRunningTime="2026-02-19 09:51:54.84015077 +0000 UTC m=+5457.583808229" Feb 19 09:51:58 crc kubenswrapper[4780]: I0219 09:51:58.190764 4780 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 09:51:58 crc kubenswrapper[4780]: I0219 09:51:58.839557 4780 generic.go:334] "Generic (PLEG): container finished" podID="b3da8aec-25ad-4017-bb05-6b87fa4f359a" containerID="3711dda938dee06ddd9ae46f83b57c758dd7fccef8194dd1d99fcb8c66adf618" exitCode=0 Feb 19 09:51:58 crc kubenswrapper[4780]: I0219 09:51:58.839610 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqbvj" event={"ID":"b3da8aec-25ad-4017-bb05-6b87fa4f359a","Type":"ContainerDied","Data":"3711dda938dee06ddd9ae46f83b57c758dd7fccef8194dd1d99fcb8c66adf618"} Feb 19 09:51:59 crc kubenswrapper[4780]: I0219 09:51:59.117227 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 09:51:59 crc kubenswrapper[4780]: I0219 09:51:59.117544 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 09:51:59 crc kubenswrapper[4780]: I0219 09:51:59.550082 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 09:51:59 crc kubenswrapper[4780]: I0219 09:51:59.550195 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.199365 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.199562 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.246739 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.294329 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-scripts\") pod \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.294403 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-config-data\") pod \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.294443 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4htj\" (UniqueName: \"kubernetes.io/projected/b3da8aec-25ad-4017-bb05-6b87fa4f359a-kube-api-access-l4htj\") pod \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.294614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-combined-ca-bundle\") pod \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\" (UID: \"b3da8aec-25ad-4017-bb05-6b87fa4f359a\") " Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.303884 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-scripts" (OuterVolumeSpecName: "scripts") pod "b3da8aec-25ad-4017-bb05-6b87fa4f359a" (UID: "b3da8aec-25ad-4017-bb05-6b87fa4f359a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.313022 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3da8aec-25ad-4017-bb05-6b87fa4f359a-kube-api-access-l4htj" (OuterVolumeSpecName: "kube-api-access-l4htj") pod "b3da8aec-25ad-4017-bb05-6b87fa4f359a" (UID: "b3da8aec-25ad-4017-bb05-6b87fa4f359a"). InnerVolumeSpecName "kube-api-access-l4htj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.322426 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-config-data" (OuterVolumeSpecName: "config-data") pod "b3da8aec-25ad-4017-bb05-6b87fa4f359a" (UID: "b3da8aec-25ad-4017-bb05-6b87fa4f359a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.323717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3da8aec-25ad-4017-bb05-6b87fa4f359a" (UID: "b3da8aec-25ad-4017-bb05-6b87fa4f359a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.396262 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.396296 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.396373 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4htj\" (UniqueName: \"kubernetes.io/projected/b3da8aec-25ad-4017-bb05-6b87fa4f359a-kube-api-access-l4htj\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.396387 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3da8aec-25ad-4017-bb05-6b87fa4f359a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.632524 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.633605 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.859565 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-dqbvj" event={"ID":"b3da8aec-25ad-4017-bb05-6b87fa4f359a","Type":"ContainerDied","Data":"9d5c9b731ff9faadbf309e34164c2ca116f6123287cb4a0ba3f3f1273bc6fac5"} Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.859617 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d5c9b731ff9faadbf309e34164c2ca116f6123287cb4a0ba3f3f1273bc6fac5" Feb 19 09:52:00 crc kubenswrapper[4780]: I0219 09:52:00.859683 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqbvj" Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.076942 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.077953 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-log" containerID="cri-o://ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62" gracePeriod=30 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.078033 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-api" containerID="cri-o://a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757" gracePeriod=30 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.125162 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.125510 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-metadata" containerID="cri-o://139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15" gracePeriod=30 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 
09:52:01.125420 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-log" containerID="cri-o://1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85" gracePeriod=30 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.211780 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.212005 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" containerName="nova-scheduler-scheduler" containerID="cri-o://a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f" gracePeriod=30 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.868871 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerID="1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85" exitCode=143 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.868939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad91c81b-3889-413d-93b2-f57d5d8201c2","Type":"ContainerDied","Data":"1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85"} Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.870211 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerID="ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62" exitCode=143 Feb 19 09:52:01 crc kubenswrapper[4780]: I0219 09:52:01.870243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f060c4b-4164-49cf-816e-f80f9116ea8f","Type":"ContainerDied","Data":"ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.368522 4780 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.482152 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-config-data\") pod \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.482222 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44dxj\" (UniqueName: \"kubernetes.io/projected/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-kube-api-access-44dxj\") pod \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.482363 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-combined-ca-bundle\") pod \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\" (UID: \"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80\") " Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.499954 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-kube-api-access-44dxj" (OuterVolumeSpecName: "kube-api-access-44dxj") pod "0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" (UID: "0cab20f6-f252-4a8b-ad84-73fb9d0c9d80"). InnerVolumeSpecName "kube-api-access-44dxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.509522 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" (UID: "0cab20f6-f252-4a8b-ad84-73fb9d0c9d80"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.514242 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-config-data" (OuterVolumeSpecName: "config-data") pod "0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" (UID: "0cab20f6-f252-4a8b-ad84-73fb9d0c9d80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.585408 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.585452 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.585468 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44dxj\" (UniqueName: \"kubernetes.io/projected/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80-kube-api-access-44dxj\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.629540 4780 scope.go:117] "RemoveContainer" containerID="9c94caff25a36c45edac07f3b6711757ea87257ee87816d7a6b6d09c0b189f5b" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.820376 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.866963 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.907084 4780 generic.go:334] "Generic (PLEG): container finished" podID="0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" containerID="a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f" exitCode=0 Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.907187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80","Type":"ContainerDied","Data":"a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.907222 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cab20f6-f252-4a8b-ad84-73fb9d0c9d80","Type":"ContainerDied","Data":"103fc71ab0e061e835d4eeb23508089bdb70cd0d95feaf5c29637b8852e724d7"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.907226 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.907245 4780 scope.go:117] "RemoveContainer" containerID="a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.911810 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerID="a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757" exitCode=0 Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.911877 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.911891 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f060c4b-4164-49cf-816e-f80f9116ea8f","Type":"ContainerDied","Data":"a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.911924 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f060c4b-4164-49cf-816e-f80f9116ea8f","Type":"ContainerDied","Data":"f4b903665b4632e69fe1297646af80036d0b086cbb56568f962b6e509b2fae54"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.915203 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerID="139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15" exitCode=0 Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.915228 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.915263 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad91c81b-3889-413d-93b2-f57d5d8201c2","Type":"ContainerDied","Data":"139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.915990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad91c81b-3889-413d-93b2-f57d5d8201c2","Type":"ContainerDied","Data":"a3f3ab955244a2f79bb52229f59afa9c8fb6df93a1fd0651d627a9bf3b32ae74"} Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.932234 4780 scope.go:117] "RemoveContainer" containerID="a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f" Feb 19 09:52:05 crc kubenswrapper[4780]: E0219 09:52:05.936173 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f\": container with ID starting with a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f not found: ID does not exist" containerID="a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.936225 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f"} err="failed to get container status \"a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f\": rpc error: code = NotFound desc = could not find container \"a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f\": container with ID starting with a453dfeb3fa62ab5a0641963244b59e185455e9e65b5099e2531cf18d570354f not found: ID does not exist" Feb 19 09:52:05 crc kubenswrapper[4780]: I0219 09:52:05.936284 4780 scope.go:117] "RemoveContainer" 
containerID="a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.005789 4780 scope.go:117] "RemoveContainer" containerID="ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013058 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f060c4b-4164-49cf-816e-f80f9116ea8f-logs\") pod \"4f060c4b-4164-49cf-816e-f80f9116ea8f\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013142 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-config-data\") pod \"ad91c81b-3889-413d-93b2-f57d5d8201c2\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013220 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-combined-ca-bundle\") pod \"ad91c81b-3889-413d-93b2-f57d5d8201c2\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013247 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-config-data\") pod \"4f060c4b-4164-49cf-816e-f80f9116ea8f\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013292 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr8n6\" (UniqueName: \"kubernetes.io/projected/ad91c81b-3889-413d-93b2-f57d5d8201c2-kube-api-access-zr8n6\") pod \"ad91c81b-3889-413d-93b2-f57d5d8201c2\" (UID: 
\"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013372 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad91c81b-3889-413d-93b2-f57d5d8201c2-logs\") pod \"ad91c81b-3889-413d-93b2-f57d5d8201c2\" (UID: \"ad91c81b-3889-413d-93b2-f57d5d8201c2\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-562qh\" (UniqueName: \"kubernetes.io/projected/4f060c4b-4164-49cf-816e-f80f9116ea8f-kube-api-access-562qh\") pod \"4f060c4b-4164-49cf-816e-f80f9116ea8f\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.013486 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-combined-ca-bundle\") pod \"4f060c4b-4164-49cf-816e-f80f9116ea8f\" (UID: \"4f060c4b-4164-49cf-816e-f80f9116ea8f\") " Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.028830 4780 scope.go:117] "RemoveContainer" containerID="a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.029401 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757\": container with ID starting with a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757 not found: ID does not exist" containerID="a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.029476 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757"} err="failed to get 
container status \"a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757\": rpc error: code = NotFound desc = could not find container \"a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757\": container with ID starting with a706cb83e6ce3ac54f57424e99746e4baa66eca548dc0bf01fab4b8d0f4f0757 not found: ID does not exist" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.029514 4780 scope.go:117] "RemoveContainer" containerID="ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.029533 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad91c81b-3889-413d-93b2-f57d5d8201c2-logs" (OuterVolumeSpecName: "logs") pod "ad91c81b-3889-413d-93b2-f57d5d8201c2" (UID: "ad91c81b-3889-413d-93b2-f57d5d8201c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.030283 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62\": container with ID starting with ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62 not found: ID does not exist" containerID="ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.030322 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62"} err="failed to get container status \"ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62\": rpc error: code = NotFound desc = could not find container \"ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62\": container with ID starting with ec76b71e79cfec4101c78acca6fcf7e8fcc3082038987aebdcfd74eba820ca62 not found: ID does not exist" Feb 19 
09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.030345 4780 scope.go:117] "RemoveContainer" containerID="139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.030939 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f060c4b-4164-49cf-816e-f80f9116ea8f-logs" (OuterVolumeSpecName: "logs") pod "4f060c4b-4164-49cf-816e-f80f9116ea8f" (UID: "4f060c4b-4164-49cf-816e-f80f9116ea8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.031754 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f060c4b-4164-49cf-816e-f80f9116ea8f-kube-api-access-562qh" (OuterVolumeSpecName: "kube-api-access-562qh") pod "4f060c4b-4164-49cf-816e-f80f9116ea8f" (UID: "4f060c4b-4164-49cf-816e-f80f9116ea8f"). InnerVolumeSpecName "kube-api-access-562qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.034796 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad91c81b-3889-413d-93b2-f57d5d8201c2-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.034833 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-562qh\" (UniqueName: \"kubernetes.io/projected/4f060c4b-4164-49cf-816e-f80f9116ea8f-kube-api-access-562qh\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.034845 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f060c4b-4164-49cf-816e-f80f9116ea8f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.045195 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ad91c81b-3889-413d-93b2-f57d5d8201c2-kube-api-access-zr8n6" (OuterVolumeSpecName: "kube-api-access-zr8n6") pod "ad91c81b-3889-413d-93b2-f57d5d8201c2" (UID: "ad91c81b-3889-413d-93b2-f57d5d8201c2"). InnerVolumeSpecName "kube-api-access-zr8n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.048187 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad91c81b-3889-413d-93b2-f57d5d8201c2" (UID: "ad91c81b-3889-413d-93b2-f57d5d8201c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.049850 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f060c4b-4164-49cf-816e-f80f9116ea8f" (UID: "4f060c4b-4164-49cf-816e-f80f9116ea8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.052813 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-config-data" (OuterVolumeSpecName: "config-data") pod "ad91c81b-3889-413d-93b2-f57d5d8201c2" (UID: "ad91c81b-3889-413d-93b2-f57d5d8201c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.057363 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-config-data" (OuterVolumeSpecName: "config-data") pod "4f060c4b-4164-49cf-816e-f80f9116ea8f" (UID: "4f060c4b-4164-49cf-816e-f80f9116ea8f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121434 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121475 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121492 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.121874 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-metadata" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121898 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-metadata" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.121924 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" containerName="nova-scheduler-scheduler" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121931 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" containerName="nova-scheduler-scheduler" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.121940 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-api" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121946 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-api" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.121960 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-log" Feb 19 09:52:06 crc 
kubenswrapper[4780]: I0219 09:52:06.121966 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-log" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.121982 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-log" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.121987 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-log" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.121998 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3da8aec-25ad-4017-bb05-6b87fa4f359a" containerName="nova-manage" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122005 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3da8aec-25ad-4017-bb05-6b87fa4f359a" containerName="nova-manage" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122191 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" containerName="nova-scheduler-scheduler" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122211 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-metadata" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122231 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" containerName="nova-metadata-log" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122241 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-log" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122248 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" containerName="nova-api-api" Feb 19 
09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.122258 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3da8aec-25ad-4017-bb05-6b87fa4f359a" containerName="nova-manage" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.123683 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.123816 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.126116 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137653 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-config-data\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137723 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksds5\" (UniqueName: \"kubernetes.io/projected/ced16824-d9d2-4aea-8cb8-b50a32f1963e-kube-api-access-ksds5\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137789 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137801 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137810 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr8n6\" (UniqueName: \"kubernetes.io/projected/ad91c81b-3889-413d-93b2-f57d5d8201c2-kube-api-access-zr8n6\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137819 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f060c4b-4164-49cf-816e-f80f9116ea8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.137828 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad91c81b-3889-413d-93b2-f57d5d8201c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.141362 4780 scope.go:117] "RemoveContainer" containerID="1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.159047 4780 scope.go:117] "RemoveContainer" containerID="139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.159373 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15\": container with ID starting with 139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15 not found: ID does not exist" 
containerID="139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.159404 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15"} err="failed to get container status \"139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15\": rpc error: code = NotFound desc = could not find container \"139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15\": container with ID starting with 139640b5a8e291315a9e9c4a2253fd88c88f9beae8df863733450110137a1d15 not found: ID does not exist" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.159425 4780 scope.go:117] "RemoveContainer" containerID="1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85" Feb 19 09:52:06 crc kubenswrapper[4780]: E0219 09:52:06.159737 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85\": container with ID starting with 1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85 not found: ID does not exist" containerID="1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.159761 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85"} err="failed to get container status \"1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85\": rpc error: code = NotFound desc = could not find container \"1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85\": container with ID starting with 1e57def53c22b289e7a993514aa4aaf5a92c7dd2b5259ca3961fa29c08c05a85 not found: ID does not exist" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.239290 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.239448 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-config-data\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.239522 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksds5\" (UniqueName: \"kubernetes.io/projected/ced16824-d9d2-4aea-8cb8-b50a32f1963e-kube-api-access-ksds5\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.242214 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.245291 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.245333 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-config-data\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.285221 4780 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.292870 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksds5\" (UniqueName: \"kubernetes.io/projected/ced16824-d9d2-4aea-8cb8-b50a32f1963e-kube-api-access-ksds5\") pod \"nova-scheduler-0\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.301287 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.311638 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.322099 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.324464 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.327952 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.332252 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.333726 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.335500 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.335834 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.335897 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.339510 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.351296 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.439603 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442070 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-config-data\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442111 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjj8\" (UniqueName: \"kubernetes.io/projected/242a53ad-4ac8-4242-9928-aaed4f41057b-kube-api-access-rcjj8\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442257 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442315 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-config-data\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442383 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242a53ad-4ac8-4242-9928-aaed4f41057b-logs\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 
09:52:06.442455 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-logs\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442557 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.442637 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzw4k\" (UniqueName: \"kubernetes.io/projected/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-kube-api-access-dzw4k\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.547538 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-config-data\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.547947 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjj8\" (UniqueName: \"kubernetes.io/projected/242a53ad-4ac8-4242-9928-aaed4f41057b-kube-api-access-rcjj8\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.548019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.548051 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-config-data\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.548089 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242a53ad-4ac8-4242-9928-aaed4f41057b-logs\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.548160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-logs\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.548222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.548280 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzw4k\" (UniqueName: \"kubernetes.io/projected/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-kube-api-access-dzw4k\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 
09:52:06.549687 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-logs\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.550006 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242a53ad-4ac8-4242-9928-aaed4f41057b-logs\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.551401 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-config-data\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.554012 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.556849 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.556986 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-config-data\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " 
pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.565313 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzw4k\" (UniqueName: \"kubernetes.io/projected/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-kube-api-access-dzw4k\") pod \"nova-api-0\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.571638 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjj8\" (UniqueName: \"kubernetes.io/projected/242a53ad-4ac8-4242-9928-aaed4f41057b-kube-api-access-rcjj8\") pod \"nova-metadata-0\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.677273 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.688166 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.867641 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:52:06 crc kubenswrapper[4780]: W0219 09:52:06.873773 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podced16824_d9d2_4aea_8cb8_b50a32f1963e.slice/crio-a85c91918fee96fd4ac3eba48e33a206d8c3e32a67b604f231285571d55cf5be WatchSource:0}: Error finding container a85c91918fee96fd4ac3eba48e33a206d8c3e32a67b604f231285571d55cf5be: Status 404 returned error can't find the container with id a85c91918fee96fd4ac3eba48e33a206d8c3e32a67b604f231285571d55cf5be Feb 19 09:52:06 crc kubenswrapper[4780]: I0219 09:52:06.930115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ced16824-d9d2-4aea-8cb8-b50a32f1963e","Type":"ContainerStarted","Data":"a85c91918fee96fd4ac3eba48e33a206d8c3e32a67b604f231285571d55cf5be"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.137084 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.286906 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:52:07 crc kubenswrapper[4780]: W0219 09:52:07.289568 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod242a53ad_4ac8_4242_9928_aaed4f41057b.slice/crio-af36f2a0c277fa30de31e3ae9f481bc991965677afb5b5f371fc4ad7abec2c2e WatchSource:0}: Error finding container af36f2a0c277fa30de31e3ae9f481bc991965677afb5b5f371fc4ad7abec2c2e: Status 404 returned error can't find the container with id af36f2a0c277fa30de31e3ae9f481bc991965677afb5b5f371fc4ad7abec2c2e Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.965736 4780 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="0cab20f6-f252-4a8b-ad84-73fb9d0c9d80" path="/var/lib/kubelet/pods/0cab20f6-f252-4a8b-ad84-73fb9d0c9d80/volumes" Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.967475 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f060c4b-4164-49cf-816e-f80f9116ea8f" path="/var/lib/kubelet/pods/4f060c4b-4164-49cf-816e-f80f9116ea8f/volumes" Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.971359 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad91c81b-3889-413d-93b2-f57d5d8201c2" path="/var/lib/kubelet/pods/ad91c81b-3889-413d-93b2-f57d5d8201c2/volumes" Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972669 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ced16824-d9d2-4aea-8cb8-b50a32f1963e","Type":"ContainerStarted","Data":"a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"242a53ad-4ac8-4242-9928-aaed4f41057b","Type":"ContainerStarted","Data":"c91f4df3a7b05366cc3c6a57225821aca2e882de4b6e89b0b409e58a7e79ae1f"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972761 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"242a53ad-4ac8-4242-9928-aaed4f41057b","Type":"ContainerStarted","Data":"b90fcee3ace85af0cbeaffdcb5667f271e0f2e676abbaf3d21d755971c73fb39"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972779 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"242a53ad-4ac8-4242-9928-aaed4f41057b","Type":"ContainerStarted","Data":"af36f2a0c277fa30de31e3ae9f481bc991965677afb5b5f371fc4ad7abec2c2e"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972797 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf","Type":"ContainerStarted","Data":"5aa1ecb1e620d82ffcffa56654ce35fbbfd94b5dae5324ba71a051adac823a19"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972816 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf","Type":"ContainerStarted","Data":"d75772189fc915cf3ecac5e5c9aafe3adcecdd11892e1b385b4bc3197f826d85"} Feb 19 09:52:07 crc kubenswrapper[4780]: I0219 09:52:07.972835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf","Type":"ContainerStarted","Data":"cad3b2c4fcb3fe5dc01b7650d665f4ef89714cb7a36a3dd1fca9e74552ae285d"} Feb 19 09:52:08 crc kubenswrapper[4780]: I0219 09:52:08.059076 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.059057742 podStartE2EDuration="2.059057742s" podCreationTimestamp="2026-02-19 09:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:08.051457726 +0000 UTC m=+5470.795115215" watchObservedRunningTime="2026-02-19 09:52:08.059057742 +0000 UTC m=+5470.802715191" Feb 19 09:52:08 crc kubenswrapper[4780]: I0219 09:52:08.078652 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.078621676 podStartE2EDuration="3.078621676s" podCreationTimestamp="2026-02-19 09:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:08.067812677 +0000 UTC m=+5470.811470146" watchObservedRunningTime="2026-02-19 09:52:08.078621676 +0000 UTC m=+5470.822279135" Feb 19 09:52:08 crc kubenswrapper[4780]: I0219 09:52:08.095516 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.09549257 podStartE2EDuration="2.09549257s" podCreationTimestamp="2026-02-19 09:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:08.08965176 +0000 UTC m=+5470.833309219" watchObservedRunningTime="2026-02-19 09:52:08.09549257 +0000 UTC m=+5470.839150029" Feb 19 09:52:11 crc kubenswrapper[4780]: I0219 09:52:11.440600 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 09:52:11 crc kubenswrapper[4780]: I0219 09:52:11.678225 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 09:52:11 crc kubenswrapper[4780]: I0219 09:52:11.678297 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 09:52:16 crc kubenswrapper[4780]: I0219 09:52:16.440817 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 09:52:16 crc kubenswrapper[4780]: I0219 09:52:16.467401 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 09:52:16 crc kubenswrapper[4780]: I0219 09:52:16.677909 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 09:52:16 crc kubenswrapper[4780]: I0219 09:52:16.677978 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 09:52:16 crc kubenswrapper[4780]: I0219 09:52:16.688750 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 09:52:16 crc kubenswrapper[4780]: I0219 09:52:16.688820 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 09:52:17 crc kubenswrapper[4780]: I0219 09:52:17.090647 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 09:52:17 crc kubenswrapper[4780]: I0219 09:52:17.844405 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.67:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:17 crc kubenswrapper[4780]: I0219 09:52:17.844440 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.66:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:17 crc kubenswrapper[4780]: I0219 09:52:17.844406 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.66:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:17 crc kubenswrapper[4780]: I0219 09:52:17.844465 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.67:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.679915 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.682193 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.691366 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.699149 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.699485 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.699993 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 09:52:26 crc kubenswrapper[4780]: I0219 09:52:26.702720 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.156396 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.158428 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.161086 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.401857 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6896c5ffb9-mdq5f"] Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.403627 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.429738 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6896c5ffb9-mdq5f"] Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.500008 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-dns-svc\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.500057 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-config\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.500495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-sb\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.500686 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsv7w\" (UniqueName: \"kubernetes.io/projected/db016b33-909a-42cc-ae7b-2112ae0a3f55-kube-api-access-tsv7w\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.500732 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-nb\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.603193 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-dns-svc\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.603269 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-config\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.603354 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-sb\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.603418 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsv7w\" (UniqueName: \"kubernetes.io/projected/db016b33-909a-42cc-ae7b-2112ae0a3f55-kube-api-access-tsv7w\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.603450 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-nb\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.604356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-dns-svc\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.604391 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-config\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.604462 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-sb\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.604802 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-nb\") pod \"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.628538 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsv7w\" (UniqueName: \"kubernetes.io/projected/db016b33-909a-42cc-ae7b-2112ae0a3f55-kube-api-access-tsv7w\") pod 
\"dnsmasq-dns-6896c5ffb9-mdq5f\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:27 crc kubenswrapper[4780]: I0219 09:52:27.731564 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:28 crc kubenswrapper[4780]: I0219 09:52:28.297531 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6896c5ffb9-mdq5f"] Feb 19 09:52:29 crc kubenswrapper[4780]: I0219 09:52:29.172221 4780 generic.go:334] "Generic (PLEG): container finished" podID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerID="04e5f5443307a4420b573f9d14b7f4c451612ed6bdb25c234feec394ad4ee5fb" exitCode=0 Feb 19 09:52:29 crc kubenswrapper[4780]: I0219 09:52:29.172275 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" event={"ID":"db016b33-909a-42cc-ae7b-2112ae0a3f55","Type":"ContainerDied","Data":"04e5f5443307a4420b573f9d14b7f4c451612ed6bdb25c234feec394ad4ee5fb"} Feb 19 09:52:29 crc kubenswrapper[4780]: I0219 09:52:29.172563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" event={"ID":"db016b33-909a-42cc-ae7b-2112ae0a3f55","Type":"ContainerStarted","Data":"07e38bbb8b7fe109d7fd39155041b15bdbba221b5c20c0f1032fac59a8cca28a"} Feb 19 09:52:30 crc kubenswrapper[4780]: I0219 09:52:30.183419 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" event={"ID":"db016b33-909a-42cc-ae7b-2112ae0a3f55","Type":"ContainerStarted","Data":"6cbe8e5219dfa9b4560749c5af5f9b0200c2dbb94187ab4e6a360d0cffda1c7e"} Feb 19 09:52:30 crc kubenswrapper[4780]: I0219 09:52:30.183910 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:30 crc kubenswrapper[4780]: I0219 09:52:30.220138 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" podStartSLOduration=3.220097541 podStartE2EDuration="3.220097541s" podCreationTimestamp="2026-02-19 09:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:30.210933545 +0000 UTC m=+5492.954590994" watchObservedRunningTime="2026-02-19 09:52:30.220097541 +0000 UTC m=+5492.963754990" Feb 19 09:52:36 crc kubenswrapper[4780]: I0219 09:52:36.336635 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:52:36 crc kubenswrapper[4780]: I0219 09:52:36.337336 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:52:36 crc kubenswrapper[4780]: I0219 09:52:36.337404 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:52:36 crc kubenswrapper[4780]: I0219 09:52:36.338411 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e96f03e4144d44e4b89b473b042e09edbd6f26be94b76f997b0d0a3b99266763"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:52:36 crc kubenswrapper[4780]: I0219 09:52:36.338491 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://e96f03e4144d44e4b89b473b042e09edbd6f26be94b76f997b0d0a3b99266763" gracePeriod=600 Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.255470 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="e96f03e4144d44e4b89b473b042e09edbd6f26be94b76f997b0d0a3b99266763" exitCode=0 Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.255680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"e96f03e4144d44e4b89b473b042e09edbd6f26be94b76f997b0d0a3b99266763"} Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.255783 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df"} Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.255800 4780 scope.go:117] "RemoveContainer" containerID="5c1cbbbb7740e8139276419e0bb5456dc361ffea0d2e97c2b10cd56932a638f6" Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.733575 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.838426 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cc4766b9-h5zqn"] Feb 19 09:52:37 crc kubenswrapper[4780]: I0219 09:52:37.838984 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerName="dnsmasq-dns" 
containerID="cri-o://5c0b0a1f514282c43c570dd2259f7aa329151f8e6f8c8064fc6792afa781ebc5" gracePeriod=10 Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.265351 4780 generic.go:334] "Generic (PLEG): container finished" podID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerID="5c0b0a1f514282c43c570dd2259f7aa329151f8e6f8c8064fc6792afa781ebc5" exitCode=0 Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.265544 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" event={"ID":"8e6a81ab-c7b2-429e-a438-09c4268ceee2","Type":"ContainerDied","Data":"5c0b0a1f514282c43c570dd2259f7aa329151f8e6f8c8064fc6792afa781ebc5"} Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.434756 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.530708 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-sb\") pod \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.530759 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxh7m\" (UniqueName: \"kubernetes.io/projected/8e6a81ab-c7b2-429e-a438-09c4268ceee2-kube-api-access-hxh7m\") pod \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.530844 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-nb\") pod \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.530985 
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-dns-svc\") pod \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.531117 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-config\") pod \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\" (UID: \"8e6a81ab-c7b2-429e-a438-09c4268ceee2\") " Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.556400 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6a81ab-c7b2-429e-a438-09c4268ceee2-kube-api-access-hxh7m" (OuterVolumeSpecName: "kube-api-access-hxh7m") pod "8e6a81ab-c7b2-429e-a438-09c4268ceee2" (UID: "8e6a81ab-c7b2-429e-a438-09c4268ceee2"). InnerVolumeSpecName "kube-api-access-hxh7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.601892 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-config" (OuterVolumeSpecName: "config") pod "8e6a81ab-c7b2-429e-a438-09c4268ceee2" (UID: "8e6a81ab-c7b2-429e-a438-09c4268ceee2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.638182 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.638224 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxh7m\" (UniqueName: \"kubernetes.io/projected/8e6a81ab-c7b2-429e-a438-09c4268ceee2-kube-api-access-hxh7m\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.638864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e6a81ab-c7b2-429e-a438-09c4268ceee2" (UID: "8e6a81ab-c7b2-429e-a438-09c4268ceee2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.702737 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e6a81ab-c7b2-429e-a438-09c4268ceee2" (UID: "8e6a81ab-c7b2-429e-a438-09c4268ceee2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.720917 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e6a81ab-c7b2-429e-a438-09c4268ceee2" (UID: "8e6a81ab-c7b2-429e-a438-09c4268ceee2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.740348 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.740401 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:38 crc kubenswrapper[4780]: I0219 09:52:38.740415 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e6a81ab-c7b2-429e-a438-09c4268ceee2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.283818 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" event={"ID":"8e6a81ab-c7b2-429e-a438-09c4268ceee2","Type":"ContainerDied","Data":"da785d77b8d70cc1b3c3287bd0da44a50bddca24618642597736ddf94ea07ebc"} Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.283856 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cc4766b9-h5zqn" Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.283875 4780 scope.go:117] "RemoveContainer" containerID="5c0b0a1f514282c43c570dd2259f7aa329151f8e6f8c8064fc6792afa781ebc5" Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.325419 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cc4766b9-h5zqn"] Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.330690 4780 scope.go:117] "RemoveContainer" containerID="131d93aded5cc873d9e4f37a23ac1ee0950fdf430209547b98e466635beaa688" Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.337514 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cc4766b9-h5zqn"] Feb 19 09:52:39 crc kubenswrapper[4780]: I0219 09:52:39.951911 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" path="/var/lib/kubelet/pods/8e6a81ab-c7b2-429e-a438-09c4268ceee2/volumes" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.636955 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f2b9-account-create-update-4tz6x"] Feb 19 09:52:40 crc kubenswrapper[4780]: E0219 09:52:40.638171 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerName="init" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.638197 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerName="init" Feb 19 09:52:40 crc kubenswrapper[4780]: E0219 09:52:40.638257 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerName="dnsmasq-dns" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.638266 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerName="dnsmasq-dns" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 
09:52:40.638501 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6a81ab-c7b2-429e-a438-09c4268ceee2" containerName="dnsmasq-dns" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.639469 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.643785 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.651083 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8l9ts"] Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.653066 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.662245 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f2b9-account-create-update-4tz6x"] Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.676043 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8l9ts"] Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.789773 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgp5\" (UniqueName: \"kubernetes.io/projected/285bbba2-f55d-49c4-af5a-cabda95e1597-kube-api-access-5vgp5\") pod \"cinder-db-create-8l9ts\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.789816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba5a781-5b30-4c8c-b25e-55a268eb2353-operator-scripts\") pod \"cinder-f2b9-account-create-update-4tz6x\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " 
pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.789951 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr22w\" (UniqueName: \"kubernetes.io/projected/5ba5a781-5b30-4c8c-b25e-55a268eb2353-kube-api-access-mr22w\") pod \"cinder-f2b9-account-create-update-4tz6x\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.790050 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/285bbba2-f55d-49c4-af5a-cabda95e1597-operator-scripts\") pod \"cinder-db-create-8l9ts\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.891760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr22w\" (UniqueName: \"kubernetes.io/projected/5ba5a781-5b30-4c8c-b25e-55a268eb2353-kube-api-access-mr22w\") pod \"cinder-f2b9-account-create-update-4tz6x\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.891843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/285bbba2-f55d-49c4-af5a-cabda95e1597-operator-scripts\") pod \"cinder-db-create-8l9ts\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.891937 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vgp5\" (UniqueName: \"kubernetes.io/projected/285bbba2-f55d-49c4-af5a-cabda95e1597-kube-api-access-5vgp5\") pod 
\"cinder-db-create-8l9ts\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.891960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba5a781-5b30-4c8c-b25e-55a268eb2353-operator-scripts\") pod \"cinder-f2b9-account-create-update-4tz6x\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.892908 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba5a781-5b30-4c8c-b25e-55a268eb2353-operator-scripts\") pod \"cinder-f2b9-account-create-update-4tz6x\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.892916 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/285bbba2-f55d-49c4-af5a-cabda95e1597-operator-scripts\") pod \"cinder-db-create-8l9ts\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.914152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr22w\" (UniqueName: \"kubernetes.io/projected/5ba5a781-5b30-4c8c-b25e-55a268eb2353-kube-api-access-mr22w\") pod \"cinder-f2b9-account-create-update-4tz6x\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.916692 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vgp5\" (UniqueName: \"kubernetes.io/projected/285bbba2-f55d-49c4-af5a-cabda95e1597-kube-api-access-5vgp5\") pod 
\"cinder-db-create-8l9ts\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.982683 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:40 crc kubenswrapper[4780]: I0219 09:52:40.997403 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:41 crc kubenswrapper[4780]: I0219 09:52:41.453989 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8l9ts"] Feb 19 09:52:41 crc kubenswrapper[4780]: I0219 09:52:41.576862 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f2b9-account-create-update-4tz6x"] Feb 19 09:52:41 crc kubenswrapper[4780]: W0219 09:52:41.583073 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ba5a781_5b30_4c8c_b25e_55a268eb2353.slice/crio-07d98b79eba930313399765cec50aec13b2b1a95e57a6d5e17bc46e4a7b0e7d1 WatchSource:0}: Error finding container 07d98b79eba930313399765cec50aec13b2b1a95e57a6d5e17bc46e4a7b0e7d1: Status 404 returned error can't find the container with id 07d98b79eba930313399765cec50aec13b2b1a95e57a6d5e17bc46e4a7b0e7d1 Feb 19 09:52:42 crc kubenswrapper[4780]: I0219 09:52:42.317745 4780 generic.go:334] "Generic (PLEG): container finished" podID="285bbba2-f55d-49c4-af5a-cabda95e1597" containerID="8b9986a86cd31b978e536d485323646ffa530cbf1565d28d0ea4056169645c6b" exitCode=0 Feb 19 09:52:42 crc kubenswrapper[4780]: I0219 09:52:42.317878 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8l9ts" event={"ID":"285bbba2-f55d-49c4-af5a-cabda95e1597","Type":"ContainerDied","Data":"8b9986a86cd31b978e536d485323646ffa530cbf1565d28d0ea4056169645c6b"} Feb 19 09:52:42 crc kubenswrapper[4780]: I0219 09:52:42.318280 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8l9ts" event={"ID":"285bbba2-f55d-49c4-af5a-cabda95e1597","Type":"ContainerStarted","Data":"db45f8a6f319300ec41ddb704a439969bdaece12fe3e2be6bb3997c87fb7242e"} Feb 19 09:52:42 crc kubenswrapper[4780]: I0219 09:52:42.320827 4780 generic.go:334] "Generic (PLEG): container finished" podID="5ba5a781-5b30-4c8c-b25e-55a268eb2353" containerID="f53502caa130f7fe7be08951a0a90f58a9df789b99aac9addf21fbec89e56809" exitCode=0 Feb 19 09:52:42 crc kubenswrapper[4780]: I0219 09:52:42.320886 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f2b9-account-create-update-4tz6x" event={"ID":"5ba5a781-5b30-4c8c-b25e-55a268eb2353","Type":"ContainerDied","Data":"f53502caa130f7fe7be08951a0a90f58a9df789b99aac9addf21fbec89e56809"} Feb 19 09:52:42 crc kubenswrapper[4780]: I0219 09:52:42.320928 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f2b9-account-create-update-4tz6x" event={"ID":"5ba5a781-5b30-4c8c-b25e-55a268eb2353","Type":"ContainerStarted","Data":"07d98b79eba930313399765cec50aec13b2b1a95e57a6d5e17bc46e4a7b0e7d1"} Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.716752 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.721535 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.875848 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba5a781-5b30-4c8c-b25e-55a268eb2353-operator-scripts\") pod \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.875898 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vgp5\" (UniqueName: \"kubernetes.io/projected/285bbba2-f55d-49c4-af5a-cabda95e1597-kube-api-access-5vgp5\") pod \"285bbba2-f55d-49c4-af5a-cabda95e1597\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.875935 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr22w\" (UniqueName: \"kubernetes.io/projected/5ba5a781-5b30-4c8c-b25e-55a268eb2353-kube-api-access-mr22w\") pod \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\" (UID: \"5ba5a781-5b30-4c8c-b25e-55a268eb2353\") " Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.876039 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/285bbba2-f55d-49c4-af5a-cabda95e1597-operator-scripts\") pod \"285bbba2-f55d-49c4-af5a-cabda95e1597\" (UID: \"285bbba2-f55d-49c4-af5a-cabda95e1597\") " Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.876549 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285bbba2-f55d-49c4-af5a-cabda95e1597-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "285bbba2-f55d-49c4-af5a-cabda95e1597" (UID: "285bbba2-f55d-49c4-af5a-cabda95e1597"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.876570 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba5a781-5b30-4c8c-b25e-55a268eb2353-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ba5a781-5b30-4c8c-b25e-55a268eb2353" (UID: "5ba5a781-5b30-4c8c-b25e-55a268eb2353"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.877431 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba5a781-5b30-4c8c-b25e-55a268eb2353-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.877454 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/285bbba2-f55d-49c4-af5a-cabda95e1597-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.881503 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba5a781-5b30-4c8c-b25e-55a268eb2353-kube-api-access-mr22w" (OuterVolumeSpecName: "kube-api-access-mr22w") pod "5ba5a781-5b30-4c8c-b25e-55a268eb2353" (UID: "5ba5a781-5b30-4c8c-b25e-55a268eb2353"). InnerVolumeSpecName "kube-api-access-mr22w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.882119 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285bbba2-f55d-49c4-af5a-cabda95e1597-kube-api-access-5vgp5" (OuterVolumeSpecName: "kube-api-access-5vgp5") pod "285bbba2-f55d-49c4-af5a-cabda95e1597" (UID: "285bbba2-f55d-49c4-af5a-cabda95e1597"). InnerVolumeSpecName "kube-api-access-5vgp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.979664 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr22w\" (UniqueName: \"kubernetes.io/projected/5ba5a781-5b30-4c8c-b25e-55a268eb2353-kube-api-access-mr22w\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:43 crc kubenswrapper[4780]: I0219 09:52:43.979710 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vgp5\" (UniqueName: \"kubernetes.io/projected/285bbba2-f55d-49c4-af5a-cabda95e1597-kube-api-access-5vgp5\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:44 crc kubenswrapper[4780]: I0219 09:52:44.341998 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8l9ts" Feb 19 09:52:44 crc kubenswrapper[4780]: I0219 09:52:44.342015 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8l9ts" event={"ID":"285bbba2-f55d-49c4-af5a-cabda95e1597","Type":"ContainerDied","Data":"db45f8a6f319300ec41ddb704a439969bdaece12fe3e2be6bb3997c87fb7242e"} Feb 19 09:52:44 crc kubenswrapper[4780]: I0219 09:52:44.342451 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db45f8a6f319300ec41ddb704a439969bdaece12fe3e2be6bb3997c87fb7242e" Feb 19 09:52:44 crc kubenswrapper[4780]: I0219 09:52:44.343914 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f2b9-account-create-update-4tz6x" event={"ID":"5ba5a781-5b30-4c8c-b25e-55a268eb2353","Type":"ContainerDied","Data":"07d98b79eba930313399765cec50aec13b2b1a95e57a6d5e17bc46e4a7b0e7d1"} Feb 19 09:52:44 crc kubenswrapper[4780]: I0219 09:52:44.343962 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d98b79eba930313399765cec50aec13b2b1a95e57a6d5e17bc46e4a7b0e7d1" Feb 19 09:52:44 crc kubenswrapper[4780]: I0219 09:52:44.343992 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f2b9-account-create-update-4tz6x" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.775158 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gpgsw"] Feb 19 09:52:45 crc kubenswrapper[4780]: E0219 09:52:45.775615 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba5a781-5b30-4c8c-b25e-55a268eb2353" containerName="mariadb-account-create-update" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.775633 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba5a781-5b30-4c8c-b25e-55a268eb2353" containerName="mariadb-account-create-update" Feb 19 09:52:45 crc kubenswrapper[4780]: E0219 09:52:45.775662 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285bbba2-f55d-49c4-af5a-cabda95e1597" containerName="mariadb-database-create" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.775671 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="285bbba2-f55d-49c4-af5a-cabda95e1597" containerName="mariadb-database-create" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.775899 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba5a781-5b30-4c8c-b25e-55a268eb2353" containerName="mariadb-account-create-update" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.775919 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="285bbba2-f55d-49c4-af5a-cabda95e1597" containerName="mariadb-database-create" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.776526 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.781802 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.781874 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qwlvq" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.781905 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.784917 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gpgsw"] Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.911073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-db-sync-config-data\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.911462 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-combined-ca-bundle\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.911540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c893a481-a297-4a71-8aed-a90c65624477-etc-machine-id\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.911576 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-scripts\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.911826 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-config-data\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:45 crc kubenswrapper[4780]: I0219 09:52:45.912076 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrhr8\" (UniqueName: \"kubernetes.io/projected/c893a481-a297-4a71-8aed-a90c65624477-kube-api-access-mrhr8\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.013754 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-config-data\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.013842 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrhr8\" (UniqueName: \"kubernetes.io/projected/c893a481-a297-4a71-8aed-a90c65624477-kube-api-access-mrhr8\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.013890 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-db-sync-config-data\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.013939 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-combined-ca-bundle\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.013980 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c893a481-a297-4a71-8aed-a90c65624477-etc-machine-id\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.014014 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-scripts\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.014810 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c893a481-a297-4a71-8aed-a90c65624477-etc-machine-id\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.019638 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-combined-ca-bundle\") pod \"cinder-db-sync-gpgsw\" (UID: 
\"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.019671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-scripts\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.020114 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-db-sync-config-data\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.020460 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-config-data\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.032393 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrhr8\" (UniqueName: \"kubernetes.io/projected/c893a481-a297-4a71-8aed-a90c65624477-kube-api-access-mrhr8\") pod \"cinder-db-sync-gpgsw\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.094288 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:46 crc kubenswrapper[4780]: W0219 09:52:46.604202 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc893a481_a297_4a71_8aed_a90c65624477.slice/crio-58539e74da69b721a467bcdbf08c831ec075dd9110a6ba27794af77164ce011a WatchSource:0}: Error finding container 58539e74da69b721a467bcdbf08c831ec075dd9110a6ba27794af77164ce011a: Status 404 returned error can't find the container with id 58539e74da69b721a467bcdbf08c831ec075dd9110a6ba27794af77164ce011a Feb 19 09:52:46 crc kubenswrapper[4780]: I0219 09:52:46.613434 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gpgsw"] Feb 19 09:52:47 crc kubenswrapper[4780]: I0219 09:52:47.374061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpgsw" event={"ID":"c893a481-a297-4a71-8aed-a90c65624477","Type":"ContainerStarted","Data":"1275bb9bba97972a48e7bb4529a071acc50ad1b27f970d0705195d2f664f57b0"} Feb 19 09:52:47 crc kubenswrapper[4780]: I0219 09:52:47.374418 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpgsw" event={"ID":"c893a481-a297-4a71-8aed-a90c65624477","Type":"ContainerStarted","Data":"58539e74da69b721a467bcdbf08c831ec075dd9110a6ba27794af77164ce011a"} Feb 19 09:52:47 crc kubenswrapper[4780]: I0219 09:52:47.394995 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gpgsw" podStartSLOduration=2.394977107 podStartE2EDuration="2.394977107s" podCreationTimestamp="2026-02-19 09:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:47.39311304 +0000 UTC m=+5510.136770529" watchObservedRunningTime="2026-02-19 09:52:47.394977107 +0000 UTC m=+5510.138634556" Feb 19 09:52:50 crc kubenswrapper[4780]: I0219 09:52:50.425669 
4780 generic.go:334] "Generic (PLEG): container finished" podID="c893a481-a297-4a71-8aed-a90c65624477" containerID="1275bb9bba97972a48e7bb4529a071acc50ad1b27f970d0705195d2f664f57b0" exitCode=0 Feb 19 09:52:50 crc kubenswrapper[4780]: I0219 09:52:50.425761 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpgsw" event={"ID":"c893a481-a297-4a71-8aed-a90c65624477","Type":"ContainerDied","Data":"1275bb9bba97972a48e7bb4529a071acc50ad1b27f970d0705195d2f664f57b0"} Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.763343 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927245 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c893a481-a297-4a71-8aed-a90c65624477-etc-machine-id\") pod \"c893a481-a297-4a71-8aed-a90c65624477\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927297 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrhr8\" (UniqueName: \"kubernetes.io/projected/c893a481-a297-4a71-8aed-a90c65624477-kube-api-access-mrhr8\") pod \"c893a481-a297-4a71-8aed-a90c65624477\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927378 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-db-sync-config-data\") pod \"c893a481-a297-4a71-8aed-a90c65624477\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927369 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c893a481-a297-4a71-8aed-a90c65624477-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "c893a481-a297-4a71-8aed-a90c65624477" (UID: "c893a481-a297-4a71-8aed-a90c65624477"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927463 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-combined-ca-bundle\") pod \"c893a481-a297-4a71-8aed-a90c65624477\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927560 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-scripts\") pod \"c893a481-a297-4a71-8aed-a90c65624477\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.927645 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-config-data\") pod \"c893a481-a297-4a71-8aed-a90c65624477\" (UID: \"c893a481-a297-4a71-8aed-a90c65624477\") " Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.928834 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c893a481-a297-4a71-8aed-a90c65624477-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.934352 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-scripts" (OuterVolumeSpecName: "scripts") pod "c893a481-a297-4a71-8aed-a90c65624477" (UID: "c893a481-a297-4a71-8aed-a90c65624477"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.937464 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c893a481-a297-4a71-8aed-a90c65624477" (UID: "c893a481-a297-4a71-8aed-a90c65624477"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.957602 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c893a481-a297-4a71-8aed-a90c65624477" (UID: "c893a481-a297-4a71-8aed-a90c65624477"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.971830 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c893a481-a297-4a71-8aed-a90c65624477-kube-api-access-mrhr8" (OuterVolumeSpecName: "kube-api-access-mrhr8") pod "c893a481-a297-4a71-8aed-a90c65624477" (UID: "c893a481-a297-4a71-8aed-a90c65624477"). InnerVolumeSpecName "kube-api-access-mrhr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:52:51 crc kubenswrapper[4780]: I0219 09:52:51.988429 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-config-data" (OuterVolumeSpecName: "config-data") pod "c893a481-a297-4a71-8aed-a90c65624477" (UID: "c893a481-a297-4a71-8aed-a90c65624477"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.030807 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.030844 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.030854 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.030863 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893a481-a297-4a71-8aed-a90c65624477-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.030872 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrhr8\" (UniqueName: \"kubernetes.io/projected/c893a481-a297-4a71-8aed-a90c65624477-kube-api-access-mrhr8\") on node \"crc\" DevicePath \"\"" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.450757 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpgsw" event={"ID":"c893a481-a297-4a71-8aed-a90c65624477","Type":"ContainerDied","Data":"58539e74da69b721a467bcdbf08c831ec075dd9110a6ba27794af77164ce011a"} Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.450799 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58539e74da69b721a467bcdbf08c831ec075dd9110a6ba27794af77164ce011a" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.450872 4780 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gpgsw" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.773838 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c857455cc-2kg4n"] Feb 19 09:52:52 crc kubenswrapper[4780]: E0219 09:52:52.775681 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c893a481-a297-4a71-8aed-a90c65624477" containerName="cinder-db-sync" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.775756 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c893a481-a297-4a71-8aed-a90c65624477" containerName="cinder-db-sync" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.776045 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c893a481-a297-4a71-8aed-a90c65624477" containerName="cinder-db-sync" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.777073 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.826315 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c857455cc-2kg4n"] Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.852396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.852496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-config\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " 
pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.852569 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.852757 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.852841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwqng\" (UniqueName: \"kubernetes.io/projected/ac612177-68e7-431e-aaa2-f21833ccaa6e-kube-api-access-nwqng\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.960024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.960102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-config\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " 
pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.960165 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.960248 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.960297 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwqng\" (UniqueName: \"kubernetes.io/projected/ac612177-68e7-431e-aaa2-f21833ccaa6e-kube-api-access-nwqng\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.961735 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.962088 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc 
kubenswrapper[4780]: I0219 09:52:52.962453 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.963030 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-config\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:52 crc kubenswrapper[4780]: I0219 09:52:52.991216 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwqng\" (UniqueName: \"kubernetes.io/projected/ac612177-68e7-431e-aaa2-f21833ccaa6e-kube-api-access-nwqng\") pod \"dnsmasq-dns-5c857455cc-2kg4n\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") " pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.006093 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.008032 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.010121 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qwlvq" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.012546 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.012937 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.013335 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.087620 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.104071 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.163927 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-scripts\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.164037 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebefc259-5da4-4803-b476-6bdaa0191385-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.164110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bn4\" (UniqueName: 
\"kubernetes.io/projected/ebefc259-5da4-4803-b476-6bdaa0191385-kube-api-access-t2bn4\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.164208 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.164263 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data-custom\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.164341 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.164409 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebefc259-5da4-4803-b476-6bdaa0191385-logs\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.266789 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebefc259-5da4-4803-b476-6bdaa0191385-logs\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " 
pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.276631 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebefc259-5da4-4803-b476-6bdaa0191385-logs\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.276984 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-scripts\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.277096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebefc259-5da4-4803-b476-6bdaa0191385-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.277167 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bn4\" (UniqueName: \"kubernetes.io/projected/ebefc259-5da4-4803-b476-6bdaa0191385-kube-api-access-t2bn4\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.277211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.277253 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data-custom\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.277340 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.280939 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebefc259-5da4-4803-b476-6bdaa0191385-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.283150 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data-custom\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.284406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.284754 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-scripts\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.286414 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.304803 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bn4\" (UniqueName: \"kubernetes.io/projected/ebefc259-5da4-4803-b476-6bdaa0191385-kube-api-access-t2bn4\") pod \"cinder-api-0\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.387035 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.713100 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c857455cc-2kg4n"] Feb 19 09:52:53 crc kubenswrapper[4780]: W0219 09:52:53.723460 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac612177_68e7_431e_aaa2_f21833ccaa6e.slice/crio-71398f689b8e82d6bdea2e8bafa008e5472f113f0f2c402e43c05f4a1a6dea9a WatchSource:0}: Error finding container 71398f689b8e82d6bdea2e8bafa008e5472f113f0f2c402e43c05f4a1a6dea9a: Status 404 returned error can't find the container with id 71398f689b8e82d6bdea2e8bafa008e5472f113f0f2c402e43c05f4a1a6dea9a Feb 19 09:52:53 crc kubenswrapper[4780]: I0219 09:52:53.988014 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:52:54 crc kubenswrapper[4780]: W0219 09:52:54.013894 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebefc259_5da4_4803_b476_6bdaa0191385.slice/crio-c86c76b22eb88bc79bc86a4dabbbb8fd6ff9977b8e49db04e649ae2db320b007 WatchSource:0}: Error finding 
container c86c76b22eb88bc79bc86a4dabbbb8fd6ff9977b8e49db04e649ae2db320b007: Status 404 returned error can't find the container with id c86c76b22eb88bc79bc86a4dabbbb8fd6ff9977b8e49db04e649ae2db320b007 Feb 19 09:52:54 crc kubenswrapper[4780]: I0219 09:52:54.488495 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerID="53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e" exitCode=0 Feb 19 09:52:54 crc kubenswrapper[4780]: I0219 09:52:54.488684 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" event={"ID":"ac612177-68e7-431e-aaa2-f21833ccaa6e","Type":"ContainerDied","Data":"53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e"} Feb 19 09:52:54 crc kubenswrapper[4780]: I0219 09:52:54.488792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" event={"ID":"ac612177-68e7-431e-aaa2-f21833ccaa6e","Type":"ContainerStarted","Data":"71398f689b8e82d6bdea2e8bafa008e5472f113f0f2c402e43c05f4a1a6dea9a"} Feb 19 09:52:54 crc kubenswrapper[4780]: I0219 09:52:54.494342 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebefc259-5da4-4803-b476-6bdaa0191385","Type":"ContainerStarted","Data":"c86c76b22eb88bc79bc86a4dabbbb8fd6ff9977b8e49db04e649ae2db320b007"} Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.506962 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" event={"ID":"ac612177-68e7-431e-aaa2-f21833ccaa6e","Type":"ContainerStarted","Data":"13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba"} Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.507830 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.509732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"ebefc259-5da4-4803-b476-6bdaa0191385","Type":"ContainerStarted","Data":"62c8507ccef87cdf2ceefe5078cd2da8763f3d648af48437ab86e3f9c49842f9"} Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.509766 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebefc259-5da4-4803-b476-6bdaa0191385","Type":"ContainerStarted","Data":"95c82788501d5f7a44387bb6bd1914be2bffac7a4c40486cfe0e67dcc4572351"} Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.510758 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.532103 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" podStartSLOduration=3.532084263 podStartE2EDuration="3.532084263s" podCreationTimestamp="2026-02-19 09:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:55.530551764 +0000 UTC m=+5518.274209213" watchObservedRunningTime="2026-02-19 09:52:55.532084263 +0000 UTC m=+5518.275741712" Feb 19 09:52:55 crc kubenswrapper[4780]: I0219 09:52:55.557215 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.557191478 podStartE2EDuration="3.557191478s" podCreationTimestamp="2026-02-19 09:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:52:55.545532288 +0000 UTC m=+5518.289189727" watchObservedRunningTime="2026-02-19 09:52:55.557191478 +0000 UTC m=+5518.300848947" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.106413 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 
09:53:03.193285 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6896c5ffb9-mdq5f"] Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.193532 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerName="dnsmasq-dns" containerID="cri-o://6cbe8e5219dfa9b4560749c5af5f9b0200c2dbb94187ab4e6a360d0cffda1c7e" gracePeriod=10 Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.589154 4780 generic.go:334] "Generic (PLEG): container finished" podID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerID="6cbe8e5219dfa9b4560749c5af5f9b0200c2dbb94187ab4e6a360d0cffda1c7e" exitCode=0 Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.589574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" event={"ID":"db016b33-909a-42cc-ae7b-2112ae0a3f55","Type":"ContainerDied","Data":"6cbe8e5219dfa9b4560749c5af5f9b0200c2dbb94187ab4e6a360d0cffda1c7e"} Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.655207 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.795353 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-sb\") pod \"db016b33-909a-42cc-ae7b-2112ae0a3f55\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.795471 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-nb\") pod \"db016b33-909a-42cc-ae7b-2112ae0a3f55\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.795568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsv7w\" (UniqueName: \"kubernetes.io/projected/db016b33-909a-42cc-ae7b-2112ae0a3f55-kube-api-access-tsv7w\") pod \"db016b33-909a-42cc-ae7b-2112ae0a3f55\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.795611 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-config\") pod \"db016b33-909a-42cc-ae7b-2112ae0a3f55\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.795642 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-dns-svc\") pod \"db016b33-909a-42cc-ae7b-2112ae0a3f55\" (UID: \"db016b33-909a-42cc-ae7b-2112ae0a3f55\") " Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.828682 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/db016b33-909a-42cc-ae7b-2112ae0a3f55-kube-api-access-tsv7w" (OuterVolumeSpecName: "kube-api-access-tsv7w") pod "db016b33-909a-42cc-ae7b-2112ae0a3f55" (UID: "db016b33-909a-42cc-ae7b-2112ae0a3f55"). InnerVolumeSpecName "kube-api-access-tsv7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.875963 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db016b33-909a-42cc-ae7b-2112ae0a3f55" (UID: "db016b33-909a-42cc-ae7b-2112ae0a3f55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.889309 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db016b33-909a-42cc-ae7b-2112ae0a3f55" (UID: "db016b33-909a-42cc-ae7b-2112ae0a3f55"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.890832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-config" (OuterVolumeSpecName: "config") pod "db016b33-909a-42cc-ae7b-2112ae0a3f55" (UID: "db016b33-909a-42cc-ae7b-2112ae0a3f55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.897277 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.897312 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsv7w\" (UniqueName: \"kubernetes.io/projected/db016b33-909a-42cc-ae7b-2112ae0a3f55-kube-api-access-tsv7w\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.897324 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.897336 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:03 crc kubenswrapper[4780]: I0219 09:53:03.902027 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db016b33-909a-42cc-ae7b-2112ae0a3f55" (UID: "db016b33-909a-42cc-ae7b-2112ae0a3f55"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.000063 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db016b33-909a-42cc-ae7b-2112ae0a3f55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.601290 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" event={"ID":"db016b33-909a-42cc-ae7b-2112ae0a3f55","Type":"ContainerDied","Data":"07e38bbb8b7fe109d7fd39155041b15bdbba221b5c20c0f1032fac59a8cca28a"} Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.601674 4780 scope.go:117] "RemoveContainer" containerID="6cbe8e5219dfa9b4560749c5af5f9b0200c2dbb94187ab4e6a360d0cffda1c7e" Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.601520 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6896c5ffb9-mdq5f" Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.630422 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6896c5ffb9-mdq5f"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.633767 4780 scope.go:117] "RemoveContainer" containerID="04e5f5443307a4420b573f9d14b7f4c451612ed6bdb25c234feec394ad4ee5fb" Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.639959 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6896c5ffb9-mdq5f"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.842481 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.843265 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerName="nova-scheduler-scheduler" 
containerID="cri-o://a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" gracePeriod=30 Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.855417 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.856030 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3785f226-0b67-49df-bbf8-047fab757679" containerName="nova-cell0-conductor-conductor" containerID="cri-o://1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377" gracePeriod=30 Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.866656 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.866942 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86" gracePeriod=30 Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.912166 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.912418 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-log" containerID="cri-o://b90fcee3ace85af0cbeaffdcb5667f271e0f2e676abbaf3d21d755971c73fb39" gracePeriod=30 Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.912812 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-metadata" 
containerID="cri-o://c91f4df3a7b05366cc3c6a57225821aca2e882de4b6e89b0b409e58a7e79ae1f" gracePeriod=30 Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.943707 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.944017 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-log" containerID="cri-o://d75772189fc915cf3ecac5e5c9aafe3adcecdd11892e1b385b4bc3197f826d85" gracePeriod=30 Feb 19 09:53:04 crc kubenswrapper[4780]: I0219 09:53:04.945625 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-api" containerID="cri-o://5aa1ecb1e620d82ffcffa56654ce35fbbfd94b5dae5324ba71a051adac823a19" gracePeriod=30 Feb 19 09:53:05 crc kubenswrapper[4780]: I0219 09:53:05.491243 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 09:53:05 crc kubenswrapper[4780]: I0219 09:53:05.614070 4780 generic.go:334] "Generic (PLEG): container finished" podID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerID="b90fcee3ace85af0cbeaffdcb5667f271e0f2e676abbaf3d21d755971c73fb39" exitCode=143 Feb 19 09:53:05 crc kubenswrapper[4780]: I0219 09:53:05.614148 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"242a53ad-4ac8-4242-9928-aaed4f41057b","Type":"ContainerDied","Data":"b90fcee3ace85af0cbeaffdcb5667f271e0f2e676abbaf3d21d755971c73fb39"} Feb 19 09:53:05 crc kubenswrapper[4780]: I0219 09:53:05.616359 4780 generic.go:334] "Generic (PLEG): container finished" podID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerID="d75772189fc915cf3ecac5e5c9aafe3adcecdd11892e1b385b4bc3197f826d85" exitCode=143 Feb 19 09:53:05 crc kubenswrapper[4780]: I0219 09:53:05.616416 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf","Type":"ContainerDied","Data":"d75772189fc915cf3ecac5e5c9aafe3adcecdd11892e1b385b4bc3197f826d85"} Feb 19 09:53:05 crc kubenswrapper[4780]: I0219 09:53:05.955517 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" path="/var/lib/kubelet/pods/db016b33-909a-42cc-ae7b-2112ae0a3f55/volumes" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.191205 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.379854 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhc9b\" (UniqueName: \"kubernetes.io/projected/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-kube-api-access-bhc9b\") pod \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.379915 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-config-data\") pod \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.379963 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-combined-ca-bundle\") pod \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\" (UID: \"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5\") " Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.390313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-kube-api-access-bhc9b" (OuterVolumeSpecName: "kube-api-access-bhc9b") pod 
"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" (UID: "9ac84136-9e8b-42e7-aaa5-6a56c39f50d5"). InnerVolumeSpecName "kube-api-access-bhc9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.413284 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-config-data" (OuterVolumeSpecName: "config-data") pod "9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" (UID: "9ac84136-9e8b-42e7-aaa5-6a56c39f50d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.435032 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" (UID: "9ac84136-9e8b-42e7-aaa5-6a56c39f50d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.442685 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.447154 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.448568 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.448609 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerName="nova-scheduler-scheduler" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.482287 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.482557 4780 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.482641 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhc9b\" (UniqueName: \"kubernetes.io/projected/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5-kube-api-access-bhc9b\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.639529 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" containerID="7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86" exitCode=0 Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.639595 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5","Type":"ContainerDied","Data":"7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86"} Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.639628 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ac84136-9e8b-42e7-aaa5-6a56c39f50d5","Type":"ContainerDied","Data":"3803806226cc3d13aaeade202ee9fbb19ba2a6d2244f62bb581c5e6ccab8651b"} Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.639657 4780 scope.go:117] "RemoveContainer" containerID="7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.639795 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.665545 4780 scope.go:117] "RemoveContainer" containerID="7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86" Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.670269 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86\": container with ID starting with 7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86 not found: ID does not exist" containerID="7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.670422 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86"} err="failed to get container status \"7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86\": rpc error: code = NotFound desc = could not find container \"7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86\": container with ID starting with 7d2e87874560812a8d099b159dfe3e76b7370c9e9be904ca34a85e9b5927cb86 not found: ID does not exist" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.688008 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.697038 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.724476 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.724934 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerName="dnsmasq-dns" Feb 19 09:53:06 crc 
kubenswrapper[4780]: I0219 09:53:06.724954 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerName="dnsmasq-dns" Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.724975 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerName="init" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.724984 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerName="init" Feb 19 09:53:06 crc kubenswrapper[4780]: E0219 09:53:06.725008 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.725014 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.725205 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.725226 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="db016b33-909a-42cc-ae7b-2112ae0a3f55" containerName="dnsmasq-dns" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.725916 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.729387 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.751247 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.891963 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4545c739-d057-47e0-820b-a3e73c74ecd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.892228 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4545c739-d057-47e0-820b-a3e73c74ecd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.892621 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d994\" (UniqueName: \"kubernetes.io/projected/4545c739-d057-47e0-820b-a3e73c74ecd8-kube-api-access-2d994\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.994542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4545c739-d057-47e0-820b-a3e73c74ecd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 
09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.994650 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d994\" (UniqueName: \"kubernetes.io/projected/4545c739-d057-47e0-820b-a3e73c74ecd8-kube-api-access-2d994\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.994714 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4545c739-d057-47e0-820b-a3e73c74ecd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:06 crc kubenswrapper[4780]: I0219 09:53:06.998929 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4545c739-d057-47e0-820b-a3e73c74ecd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.010728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4545c739-d057-47e0-820b-a3e73c74ecd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.011835 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d994\" (UniqueName: \"kubernetes.io/projected/4545c739-d057-47e0-820b-a3e73c74ecd8-kube-api-access-2d994\") pod \"nova-cell1-novncproxy-0\" (UID: \"4545c739-d057-47e0-820b-a3e73c74ecd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.059429 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.544773 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.590747 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.664305 4780 generic.go:334] "Generic (PLEG): container finished" podID="3785f226-0b67-49df-bbf8-047fab757679" containerID="1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377" exitCode=0 Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.664377 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3785f226-0b67-49df-bbf8-047fab757679","Type":"ContainerDied","Data":"1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377"} Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.664411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3785f226-0b67-49df-bbf8-047fab757679","Type":"ContainerDied","Data":"fe9e7483c60807449b2320f031b0a5022846db06362ebcf651a2af0d8baae146"} Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.664434 4780 scope.go:117] "RemoveContainer" containerID="1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.664565 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.667943 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4545c739-d057-47e0-820b-a3e73c74ecd8","Type":"ContainerStarted","Data":"1798f63b14e532b147fb67e21e6aee18e338358448c6584180fe918b87653dfe"} Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.707144 4780 scope.go:117] "RemoveContainer" containerID="1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.707769 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-combined-ca-bundle\") pod \"3785f226-0b67-49df-bbf8-047fab757679\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " Feb 19 09:53:07 crc kubenswrapper[4780]: E0219 09:53:07.707811 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377\": container with ID starting with 1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377 not found: ID does not exist" containerID="1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.707836 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-config-data\") pod \"3785f226-0b67-49df-bbf8-047fab757679\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.707910 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377"} err="failed to get container status 
\"1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377\": rpc error: code = NotFound desc = could not find container \"1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377\": container with ID starting with 1273e56b8414693b20d2bd083736ed39cac769d4030103eaf945f90beaff5377 not found: ID does not exist" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.707995 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bglj8\" (UniqueName: \"kubernetes.io/projected/3785f226-0b67-49df-bbf8-047fab757679-kube-api-access-bglj8\") pod \"3785f226-0b67-49df-bbf8-047fab757679\" (UID: \"3785f226-0b67-49df-bbf8-047fab757679\") " Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.717313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3785f226-0b67-49df-bbf8-047fab757679-kube-api-access-bglj8" (OuterVolumeSpecName: "kube-api-access-bglj8") pod "3785f226-0b67-49df-bbf8-047fab757679" (UID: "3785f226-0b67-49df-bbf8-047fab757679"). InnerVolumeSpecName "kube-api-access-bglj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.746083 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3785f226-0b67-49df-bbf8-047fab757679" (UID: "3785f226-0b67-49df-bbf8-047fab757679"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.750227 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-config-data" (OuterVolumeSpecName: "config-data") pod "3785f226-0b67-49df-bbf8-047fab757679" (UID: "3785f226-0b67-49df-bbf8-047fab757679"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.815183 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.815380 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bglj8\" (UniqueName: \"kubernetes.io/projected/3785f226-0b67-49df-bbf8-047fab757679-kube-api-access-bglj8\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:07 crc kubenswrapper[4780]: I0219 09:53:07.815444 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3785f226-0b67-49df-bbf8-047fab757679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.002574 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac84136-9e8b-42e7-aaa5-6a56c39f50d5" path="/var/lib/kubelet/pods/9ac84136-9e8b-42e7-aaa5-6a56c39f50d5/volumes" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.061182 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.081182 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.098275 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:53:08 crc kubenswrapper[4780]: E0219 09:53:08.098778 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3785f226-0b67-49df-bbf8-047fab757679" containerName="nova-cell0-conductor-conductor" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.098794 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3785f226-0b67-49df-bbf8-047fab757679" 
containerName="nova-cell0-conductor-conductor" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.099082 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3785f226-0b67-49df-bbf8-047fab757679" containerName="nova-cell0-conductor-conductor" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.099927 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.104362 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.121594 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.239048 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.239240 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.239303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7fgc\" (UniqueName: \"kubernetes.io/projected/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-kube-api-access-r7fgc\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc 
kubenswrapper[4780]: I0219 09:53:08.341357 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7fgc\" (UniqueName: \"kubernetes.io/projected/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-kube-api-access-r7fgc\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.341806 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.342168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.346035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.347567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.366373 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r7fgc\" (UniqueName: \"kubernetes.io/projected/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-kube-api-access-r7fgc\") pod \"nova-cell0-conductor-0\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.438805 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.67:8774/\": read tcp 10.217.0.2:57118->10.217.1.67:8774: read: connection reset by peer" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.438856 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.67:8774/\": read tcp 10.217.0.2:57134->10.217.1.67:8774: read: connection reset by peer" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.446346 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.495993 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.66:8775/\": read tcp 10.217.0.2:45560->10.217.1.66:8775: read: connection reset by peer" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.496004 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.66:8775/\": read tcp 10.217.0.2:45554->10.217.1.66:8775: read: connection reset by peer" Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.501674 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.501969 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="ac8d588b-2078-4e44-bcd5-3d31116fc462" containerName="nova-cell1-conductor-conductor" containerID="cri-o://62c2f68ca215a1f039883ee42ca809a1866ec3496f78c274727a2c331858952d" gracePeriod=30 Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.710388 4780 generic.go:334] "Generic (PLEG): container finished" podID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerID="5aa1ecb1e620d82ffcffa56654ce35fbbfd94b5dae5324ba71a051adac823a19" exitCode=0 Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.710502 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf","Type":"ContainerDied","Data":"5aa1ecb1e620d82ffcffa56654ce35fbbfd94b5dae5324ba71a051adac823a19"} Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.714673 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4545c739-d057-47e0-820b-a3e73c74ecd8","Type":"ContainerStarted","Data":"3305ff74d16c0fd0459d98303afcc35b536f61ce352f0635da1c7281528e43fe"} Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.721048 4780 generic.go:334] "Generic (PLEG): container finished" podID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerID="c91f4df3a7b05366cc3c6a57225821aca2e882de4b6e89b0b409e58a7e79ae1f" exitCode=0 Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.721118 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"242a53ad-4ac8-4242-9928-aaed4f41057b","Type":"ContainerDied","Data":"c91f4df3a7b05366cc3c6a57225821aca2e882de4b6e89b0b409e58a7e79ae1f"} Feb 19 09:53:08 crc kubenswrapper[4780]: I0219 09:53:08.741326 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.741297515 podStartE2EDuration="2.741297515s" podCreationTimestamp="2026-02-19 09:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:08.731060021 +0000 UTC m=+5531.474717480" watchObservedRunningTime="2026-02-19 09:53:08.741297515 +0000 UTC m=+5531.484954984" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.083211 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.089169 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.204028 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.279973 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-config-data\") pod \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280496 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-config-data\") pod \"242a53ad-4ac8-4242-9928-aaed4f41057b\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280563 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzw4k\" (UniqueName: \"kubernetes.io/projected/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-kube-api-access-dzw4k\") pod \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280609 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-combined-ca-bundle\") pod \"242a53ad-4ac8-4242-9928-aaed4f41057b\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280650 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-combined-ca-bundle\") pod \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " Feb 19 
09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280773 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-logs\") pod \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\" (UID: \"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280813 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcjj8\" (UniqueName: \"kubernetes.io/projected/242a53ad-4ac8-4242-9928-aaed4f41057b-kube-api-access-rcjj8\") pod \"242a53ad-4ac8-4242-9928-aaed4f41057b\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.280851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242a53ad-4ac8-4242-9928-aaed4f41057b-logs\") pod \"242a53ad-4ac8-4242-9928-aaed4f41057b\" (UID: \"242a53ad-4ac8-4242-9928-aaed4f41057b\") " Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.281821 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242a53ad-4ac8-4242-9928-aaed4f41057b-logs" (OuterVolumeSpecName: "logs") pod "242a53ad-4ac8-4242-9928-aaed4f41057b" (UID: "242a53ad-4ac8-4242-9928-aaed4f41057b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.288648 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-logs" (OuterVolumeSpecName: "logs") pod "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" (UID: "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.296411 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-kube-api-access-dzw4k" (OuterVolumeSpecName: "kube-api-access-dzw4k") pod "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" (UID: "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf"). InnerVolumeSpecName "kube-api-access-dzw4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.307016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242a53ad-4ac8-4242-9928-aaed4f41057b-kube-api-access-rcjj8" (OuterVolumeSpecName: "kube-api-access-rcjj8") pod "242a53ad-4ac8-4242-9928-aaed4f41057b" (UID: "242a53ad-4ac8-4242-9928-aaed4f41057b"). InnerVolumeSpecName "kube-api-access-rcjj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.319923 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-config-data" (OuterVolumeSpecName: "config-data") pod "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" (UID: "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.333569 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "242a53ad-4ac8-4242-9928-aaed4f41057b" (UID: "242a53ad-4ac8-4242-9928-aaed4f41057b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.357326 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" (UID: "b9d8cd9e-37da-4df4-b7ee-b6859604f7bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.358073 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-config-data" (OuterVolumeSpecName: "config-data") pod "242a53ad-4ac8-4242-9928-aaed4f41057b" (UID: "242a53ad-4ac8-4242-9928-aaed4f41057b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387199 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387248 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcjj8\" (UniqueName: \"kubernetes.io/projected/242a53ad-4ac8-4242-9928-aaed4f41057b-kube-api-access-rcjj8\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387285 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242a53ad-4ac8-4242-9928-aaed4f41057b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387296 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc 
kubenswrapper[4780]: I0219 09:53:09.387357 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387371 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzw4k\" (UniqueName: \"kubernetes.io/projected/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-kube-api-access-dzw4k\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387381 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242a53ad-4ac8-4242-9928-aaed4f41057b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.387391 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.729333 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"787ed9b0-4ee5-4eae-bc6c-f465c5655d80","Type":"ContainerStarted","Data":"026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1"} Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.729380 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"787ed9b0-4ee5-4eae-bc6c-f465c5655d80","Type":"ContainerStarted","Data":"12142ab449d677d14069058e082da5437bc27650e0d26c5645bf62f75797e1fd"} Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.730478 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.732395 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.732438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"242a53ad-4ac8-4242-9928-aaed4f41057b","Type":"ContainerDied","Data":"af36f2a0c277fa30de31e3ae9f481bc991965677afb5b5f371fc4ad7abec2c2e"} Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.732485 4780 scope.go:117] "RemoveContainer" containerID="c91f4df3a7b05366cc3c6a57225821aca2e882de4b6e89b0b409e58a7e79ae1f" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.739606 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.739846 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9d8cd9e-37da-4df4-b7ee-b6859604f7bf","Type":"ContainerDied","Data":"cad3b2c4fcb3fe5dc01b7650d665f4ef89714cb7a36a3dd1fca9e74552ae285d"} Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.756584 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.75656582 podStartE2EDuration="1.75656582s" podCreationTimestamp="2026-02-19 09:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:09.754793104 +0000 UTC m=+5532.498450563" watchObservedRunningTime="2026-02-19 09:53:09.75656582 +0000 UTC m=+5532.500223289" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.819462 4780 scope.go:117] "RemoveContainer" containerID="b90fcee3ace85af0cbeaffdcb5667f271e0f2e676abbaf3d21d755971c73fb39" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.837613 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.856644 4780 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-0"] Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.859152 4780 scope.go:117] "RemoveContainer" containerID="5aa1ecb1e620d82ffcffa56654ce35fbbfd94b5dae5324ba71a051adac823a19" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.891399 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.894180 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.926514 4780 scope.go:117] "RemoveContainer" containerID="d75772189fc915cf3ecac5e5c9aafe3adcecdd11892e1b385b4bc3197f826d85" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.928188 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 09:53:09 crc kubenswrapper[4780]: E0219 09:53:09.928967 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-log" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.928989 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-log" Feb 19 09:53:09 crc kubenswrapper[4780]: E0219 09:53:09.929046 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-api" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.929056 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-api" Feb 19 09:53:09 crc kubenswrapper[4780]: E0219 09:53:09.929087 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-log" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.929095 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" 
containerName="nova-metadata-log" Feb 19 09:53:09 crc kubenswrapper[4780]: E0219 09:53:09.929139 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-metadata" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.929146 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-metadata" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.929959 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-log" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.929991 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-api" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.930009 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" containerName="nova-api-log" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.930030 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" containerName="nova-metadata-metadata" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.931738 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.937964 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 09:53:09 crc kubenswrapper[4780]: I0219 09:53:09.972704 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242a53ad-4ac8-4242-9928-aaed4f41057b" path="/var/lib/kubelet/pods/242a53ad-4ac8-4242-9928-aaed4f41057b/volumes" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.003115 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3785f226-0b67-49df-bbf8-047fab757679" path="/var/lib/kubelet/pods/3785f226-0b67-49df-bbf8-047fab757679/volumes" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.003703 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d8cd9e-37da-4df4-b7ee-b6859604f7bf" path="/var/lib/kubelet/pods/b9d8cd9e-37da-4df4-b7ee-b6859604f7bf/volumes" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.013224 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.028095 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.030637 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.033639 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.052493 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.120551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.120850 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-config-data\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.120971 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c241731f-b635-4434-97f3-5dd498ef0a3c-logs\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.121054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-config-data\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.121870 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946ccfed-bba6-41c7-bc5a-720b819d37c2-logs\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.122377 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.122499 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wtb\" (UniqueName: \"kubernetes.io/projected/c241731f-b635-4434-97f3-5dd498ef0a3c-kube-api-access-p7wtb\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.122595 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnl8\" (UniqueName: \"kubernetes.io/projected/946ccfed-bba6-41c7-bc5a-720b819d37c2-kube-api-access-stnl8\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.224796 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c241731f-b635-4434-97f3-5dd498ef0a3c-logs\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225054 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-config-data\") pod \"nova-metadata-0\" (UID: 
\"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225191 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946ccfed-bba6-41c7-bc5a-720b819d37c2-logs\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c241731f-b635-4434-97f3-5dd498ef0a3c-logs\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225400 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225582 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wtb\" (UniqueName: \"kubernetes.io/projected/c241731f-b635-4434-97f3-5dd498ef0a3c-kube-api-access-p7wtb\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225724 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stnl8\" (UniqueName: \"kubernetes.io/projected/946ccfed-bba6-41c7-bc5a-720b819d37c2-kube-api-access-stnl8\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225862 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.225970 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-config-data\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.226357 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946ccfed-bba6-41c7-bc5a-720b819d37c2-logs\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.237910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.238093 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.238364 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-config-data\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: 
I0219 09:53:10.251673 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wtb\" (UniqueName: \"kubernetes.io/projected/c241731f-b635-4434-97f3-5dd498ef0a3c-kube-api-access-p7wtb\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.252035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-config-data\") pod \"nova-api-0\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.254441 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnl8\" (UniqueName: \"kubernetes.io/projected/946ccfed-bba6-41c7-bc5a-720b819d37c2-kube-api-access-stnl8\") pod \"nova-metadata-0\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.310587 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.344897 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 09:53:10 crc kubenswrapper[4780]: W0219 09:53:10.817369 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc241731f_b635_4434_97f3_5dd498ef0a3c.slice/crio-06b42a3ed3c050c7a4248f29ae024d4be2489798558d88a29d0bea6e5a01a8ec WatchSource:0}: Error finding container 06b42a3ed3c050c7a4248f29ae024d4be2489798558d88a29d0bea6e5a01a8ec: Status 404 returned error can't find the container with id 06b42a3ed3c050c7a4248f29ae024d4be2489798558d88a29d0bea6e5a01a8ec Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.828438 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 09:53:10 crc kubenswrapper[4780]: I0219 09:53:10.942812 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 09:53:11 crc kubenswrapper[4780]: E0219 09:53:11.443644 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 09:53:11 crc kubenswrapper[4780]: E0219 09:53:11.451997 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 09:53:11 crc kubenswrapper[4780]: E0219 09:53:11.454051 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 09:53:11 crc kubenswrapper[4780]: E0219 09:53:11.454101 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerName="nova-scheduler-scheduler" Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.788552 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c241731f-b635-4434-97f3-5dd498ef0a3c","Type":"ContainerStarted","Data":"77e0ab4bc2048225d987e1e6766923342e5d61ad2f0992703e85fee058c481a3"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.789700 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c241731f-b635-4434-97f3-5dd498ef0a3c","Type":"ContainerStarted","Data":"eef77213377a0841e0cdfbadd5c5c2cad72d804251ad843f81a3ac8c1eeb9ef5"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.789724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c241731f-b635-4434-97f3-5dd498ef0a3c","Type":"ContainerStarted","Data":"06b42a3ed3c050c7a4248f29ae024d4be2489798558d88a29d0bea6e5a01a8ec"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.792505 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"946ccfed-bba6-41c7-bc5a-720b819d37c2","Type":"ContainerStarted","Data":"aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.792540 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"946ccfed-bba6-41c7-bc5a-720b819d37c2","Type":"ContainerStarted","Data":"ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.792552 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"946ccfed-bba6-41c7-bc5a-720b819d37c2","Type":"ContainerStarted","Data":"79bcfcd47175202f2523e98948d8cfc823b274f46bde1b1297d2f98e802d4046"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.794697 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac8d588b-2078-4e44-bcd5-3d31116fc462" containerID="62c2f68ca215a1f039883ee42ca809a1866ec3496f78c274727a2c331858952d" exitCode=0 Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.794769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ac8d588b-2078-4e44-bcd5-3d31116fc462","Type":"ContainerDied","Data":"62c2f68ca215a1f039883ee42ca809a1866ec3496f78c274727a2c331858952d"} Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.824720 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.824689777 podStartE2EDuration="2.824689777s" podCreationTimestamp="2026-02-19 09:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:11.823794574 +0000 UTC m=+5534.567452023" watchObservedRunningTime="2026-02-19 09:53:11.824689777 +0000 UTC m=+5534.568347226" Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.860355 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.860329115 podStartE2EDuration="2.860329115s" podCreationTimestamp="2026-02-19 09:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
09:53:11.848707606 +0000 UTC m=+5534.592365155" watchObservedRunningTime="2026-02-19 09:53:11.860329115 +0000 UTC m=+5534.603986564" Feb 19 09:53:11 crc kubenswrapper[4780]: I0219 09:53:11.981974 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.062293 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.162281 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-combined-ca-bundle\") pod \"ac8d588b-2078-4e44-bcd5-3d31116fc462\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.162417 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jqqx\" (UniqueName: \"kubernetes.io/projected/ac8d588b-2078-4e44-bcd5-3d31116fc462-kube-api-access-5jqqx\") pod \"ac8d588b-2078-4e44-bcd5-3d31116fc462\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.162492 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-config-data\") pod \"ac8d588b-2078-4e44-bcd5-3d31116fc462\" (UID: \"ac8d588b-2078-4e44-bcd5-3d31116fc462\") " Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.169003 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8d588b-2078-4e44-bcd5-3d31116fc462-kube-api-access-5jqqx" (OuterVolumeSpecName: "kube-api-access-5jqqx") pod "ac8d588b-2078-4e44-bcd5-3d31116fc462" (UID: "ac8d588b-2078-4e44-bcd5-3d31116fc462"). InnerVolumeSpecName "kube-api-access-5jqqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.190906 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-config-data" (OuterVolumeSpecName: "config-data") pod "ac8d588b-2078-4e44-bcd5-3d31116fc462" (UID: "ac8d588b-2078-4e44-bcd5-3d31116fc462"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.195577 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac8d588b-2078-4e44-bcd5-3d31116fc462" (UID: "ac8d588b-2078-4e44-bcd5-3d31116fc462"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.266390 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.266434 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jqqx\" (UniqueName: \"kubernetes.io/projected/ac8d588b-2078-4e44-bcd5-3d31116fc462-kube-api-access-5jqqx\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.266445 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8d588b-2078-4e44-bcd5-3d31116fc462-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.806798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"ac8d588b-2078-4e44-bcd5-3d31116fc462","Type":"ContainerDied","Data":"7424fa027693fdb5d43c618ad19252daca9889732b92503ca9f16b95c57902b7"} Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.806868 4780 scope.go:117] "RemoveContainer" containerID="62c2f68ca215a1f039883ee42ca809a1866ec3496f78c274727a2c331858952d" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.806871 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.861053 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.882271 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.932263 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:53:12 crc kubenswrapper[4780]: E0219 09:53:12.933203 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8d588b-2078-4e44-bcd5-3d31116fc462" containerName="nova-cell1-conductor-conductor" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.933221 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8d588b-2078-4e44-bcd5-3d31116fc462" containerName="nova-cell1-conductor-conductor" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.933439 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8d588b-2078-4e44-bcd5-3d31116fc462" containerName="nova-cell1-conductor-conductor" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.934354 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.936737 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 09:53:12 crc kubenswrapper[4780]: I0219 09:53:12.944108 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.083408 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.083473 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ss8s\" (UniqueName: \"kubernetes.io/projected/ac835368-9f36-491e-8684-ad0fbd976e4a-kube-api-access-5ss8s\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.083515 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.185695 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc 
kubenswrapper[4780]: I0219 09:53:13.185867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ss8s\" (UniqueName: \"kubernetes.io/projected/ac835368-9f36-491e-8684-ad0fbd976e4a-kube-api-access-5ss8s\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.185925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.191550 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.192496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.201856 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ss8s\" (UniqueName: \"kubernetes.io/projected/ac835368-9f36-491e-8684-ad0fbd976e4a-kube-api-access-5ss8s\") pod \"nova-cell1-conductor-0\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.263189 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.697363 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.817797 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ac835368-9f36-491e-8684-ad0fbd976e4a","Type":"ContainerStarted","Data":"e25f04b6be1524eaab70545f27becc8d535ef022caa744c64684c1a64183a3b0"} Feb 19 09:53:13 crc kubenswrapper[4780]: I0219 09:53:13.954061 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8d588b-2078-4e44-bcd5-3d31116fc462" path="/var/lib/kubelet/pods/ac8d588b-2078-4e44-bcd5-3d31116fc462/volumes" Feb 19 09:53:14 crc kubenswrapper[4780]: I0219 09:53:14.826321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ac835368-9f36-491e-8684-ad0fbd976e4a","Type":"ContainerStarted","Data":"a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41"} Feb 19 09:53:14 crc kubenswrapper[4780]: I0219 09:53:14.826744 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:14 crc kubenswrapper[4780]: I0219 09:53:14.859033 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8590149670000002 podStartE2EDuration="2.859014967s" podCreationTimestamp="2026-02-19 09:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:14.853611798 +0000 UTC m=+5537.597269247" watchObservedRunningTime="2026-02-19 09:53:14.859014967 +0000 UTC m=+5537.602672416" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.345061 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 
19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.345344 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.500763 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.628480 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-combined-ca-bundle\") pod \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.628521 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-config-data\") pod \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.628721 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksds5\" (UniqueName: \"kubernetes.io/projected/ced16824-d9d2-4aea-8cb8-b50a32f1963e-kube-api-access-ksds5\") pod \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\" (UID: \"ced16824-d9d2-4aea-8cb8-b50a32f1963e\") " Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.641301 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced16824-d9d2-4aea-8cb8-b50a32f1963e-kube-api-access-ksds5" (OuterVolumeSpecName: "kube-api-access-ksds5") pod "ced16824-d9d2-4aea-8cb8-b50a32f1963e" (UID: "ced16824-d9d2-4aea-8cb8-b50a32f1963e"). InnerVolumeSpecName "kube-api-access-ksds5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.653078 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-config-data" (OuterVolumeSpecName: "config-data") pod "ced16824-d9d2-4aea-8cb8-b50a32f1963e" (UID: "ced16824-d9d2-4aea-8cb8-b50a32f1963e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.658339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ced16824-d9d2-4aea-8cb8-b50a32f1963e" (UID: "ced16824-d9d2-4aea-8cb8-b50a32f1963e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.734501 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksds5\" (UniqueName: \"kubernetes.io/projected/ced16824-d9d2-4aea-8cb8-b50a32f1963e-kube-api-access-ksds5\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.734546 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.734559 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced16824-d9d2-4aea-8cb8-b50a32f1963e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.835523 4780 generic.go:334] "Generic (PLEG): container finished" podID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" 
exitCode=0 Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.836530 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.838979 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ced16824-d9d2-4aea-8cb8-b50a32f1963e","Type":"ContainerDied","Data":"a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5"} Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.839087 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ced16824-d9d2-4aea-8cb8-b50a32f1963e","Type":"ContainerDied","Data":"a85c91918fee96fd4ac3eba48e33a206d8c3e32a67b604f231285571d55cf5be"} Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.839138 4780 scope.go:117] "RemoveContainer" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.865147 4780 scope.go:117] "RemoveContainer" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" Feb 19 09:53:15 crc kubenswrapper[4780]: E0219 09:53:15.868469 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5\": container with ID starting with a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5 not found: ID does not exist" containerID="a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.868519 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5"} err="failed to get container status \"a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5\": rpc error: code = NotFound desc = could not find 
container \"a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5\": container with ID starting with a28a260585a3b62c217f8ede3c794f0dfbb3e4d903cd8ba3b811239661dfcbc5 not found: ID does not exist" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.872078 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.889264 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.929661 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:53:15 crc kubenswrapper[4780]: E0219 09:53:15.930917 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerName="nova-scheduler-scheduler" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.930943 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerName="nova-scheduler-scheduler" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.931420 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" containerName="nova-scheduler-scheduler" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.932625 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.938151 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.972684 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced16824-d9d2-4aea-8cb8-b50a32f1963e" path="/var/lib/kubelet/pods/ced16824-d9d2-4aea-8cb8-b50a32f1963e/volumes" Feb 19 09:53:15 crc kubenswrapper[4780]: I0219 09:53:15.973263 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.044589 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.044653 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-config-data\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.044748 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn94v\" (UniqueName: \"kubernetes.io/projected/96bd89b2-5989-4451-86e2-9a92c57390fa-kube-api-access-hn94v\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.146137 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.146466 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-config-data\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.146518 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn94v\" (UniqueName: \"kubernetes.io/projected/96bd89b2-5989-4451-86e2-9a92c57390fa-kube-api-access-hn94v\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.150585 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.151191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-config-data\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.177730 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn94v\" (UniqueName: \"kubernetes.io/projected/96bd89b2-5989-4451-86e2-9a92c57390fa-kube-api-access-hn94v\") pod \"nova-scheduler-0\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " 
pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.269036 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.723846 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 09:53:16 crc kubenswrapper[4780]: I0219 09:53:16.851564 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96bd89b2-5989-4451-86e2-9a92c57390fa","Type":"ContainerStarted","Data":"d2168c84d72456f785a1fbc66926c20c180011e342f5fe261dc048147d5dc40e"} Feb 19 09:53:17 crc kubenswrapper[4780]: I0219 09:53:17.060313 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:17 crc kubenswrapper[4780]: I0219 09:53:17.077975 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:17 crc kubenswrapper[4780]: I0219 09:53:17.863613 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96bd89b2-5989-4451-86e2-9a92c57390fa","Type":"ContainerStarted","Data":"1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f"} Feb 19 09:53:17 crc kubenswrapper[4780]: I0219 09:53:17.875564 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 09:53:17 crc kubenswrapper[4780]: I0219 09:53:17.892542 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.892505205 podStartE2EDuration="2.892505205s" podCreationTimestamp="2026-02-19 09:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:17.890009171 +0000 UTC m=+5540.633666630" 
watchObservedRunningTime="2026-02-19 09:53:17.892505205 +0000 UTC m=+5540.636162644" Feb 19 09:53:18 crc kubenswrapper[4780]: I0219 09:53:18.291992 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 09:53:18 crc kubenswrapper[4780]: I0219 09:53:18.498692 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 09:53:20 crc kubenswrapper[4780]: I0219 09:53:20.311859 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 09:53:20 crc kubenswrapper[4780]: I0219 09:53:20.313801 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 09:53:20 crc kubenswrapper[4780]: I0219 09:53:20.345718 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 09:53:20 crc kubenswrapper[4780]: I0219 09:53:20.346579 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 09:53:21 crc kubenswrapper[4780]: I0219 09:53:21.270798 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 09:53:21 crc kubenswrapper[4780]: I0219 09:53:21.479426 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.77:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:53:21 crc kubenswrapper[4780]: I0219 09:53:21.479578 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.77:8775/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 19 09:53:21 crc kubenswrapper[4780]: I0219 09:53:21.479581 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:53:21 crc kubenswrapper[4780]: I0219 09:53:21.479514 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.061104 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.063245 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.066746 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.079667 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.198853 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.199247 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.199287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.199316 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f730af23-7b78-4b11-9ec6-aa5a06513600-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 
09:53:24.199362 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6rx8\" (UniqueName: \"kubernetes.io/projected/f730af23-7b78-4b11-9ec6-aa5a06513600-kube-api-access-b6rx8\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.199428 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-scripts\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.301410 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-scripts\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.302255 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.302279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.302320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.302348 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f730af23-7b78-4b11-9ec6-aa5a06513600-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.302401 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6rx8\" (UniqueName: \"kubernetes.io/projected/f730af23-7b78-4b11-9ec6-aa5a06513600-kube-api-access-b6rx8\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.303112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f730af23-7b78-4b11-9ec6-aa5a06513600-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.307552 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-scripts\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.307804 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " 
pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.308636 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.313437 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.330318 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6rx8\" (UniqueName: \"kubernetes.io/projected/f730af23-7b78-4b11-9ec6-aa5a06513600-kube-api-access-b6rx8\") pod \"cinder-scheduler-0\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.448384 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.913462 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:24 crc kubenswrapper[4780]: I0219 09:53:24.925411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f730af23-7b78-4b11-9ec6-aa5a06513600","Type":"ContainerStarted","Data":"1f21d8f4c2ed10d04dd7e88d598fd0eb0a992f29f92c7469ccadd33642d8fa37"} Feb 19 09:53:25 crc kubenswrapper[4780]: I0219 09:53:25.713038 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:53:25 crc kubenswrapper[4780]: I0219 09:53:25.713680 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api-log" containerID="cri-o://95c82788501d5f7a44387bb6bd1914be2bffac7a4c40486cfe0e67dcc4572351" gracePeriod=30 Feb 19 09:53:25 crc kubenswrapper[4780]: I0219 09:53:25.714058 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api" containerID="cri-o://62c8507ccef87cdf2ceefe5078cd2da8763f3d648af48437ab86e3f9c49842f9" gracePeriod=30 Feb 19 09:53:25 crc kubenswrapper[4780]: I0219 09:53:25.948364 4780 generic.go:334] "Generic (PLEG): container finished" podID="ebefc259-5da4-4803-b476-6bdaa0191385" containerID="95c82788501d5f7a44387bb6bd1914be2bffac7a4c40486cfe0e67dcc4572351" exitCode=143 Feb 19 09:53:25 crc kubenswrapper[4780]: I0219 09:53:25.959574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f730af23-7b78-4b11-9ec6-aa5a06513600","Type":"ContainerStarted","Data":"212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872"} Feb 19 09:53:25 crc kubenswrapper[4780]: I0219 09:53:25.959612 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebefc259-5da4-4803-b476-6bdaa0191385","Type":"ContainerDied","Data":"95c82788501d5f7a44387bb6bd1914be2bffac7a4c40486cfe0e67dcc4572351"} Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.189751 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.192189 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.195311 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.212485 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.238967 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239036 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-sys\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239101 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: 
\"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239145 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239166 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239190 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239231 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239257 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " 
pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239279 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239301 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/88c02885-785b-46df-bbe7-259243eee84a-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239362 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239391 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239418 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-dev\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 
09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239445 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239469 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-run\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.239492 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75svs\" (UniqueName: \"kubernetes.io/projected/88c02885-785b-46df-bbe7-259243eee84a-kube-api-access-75svs\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.270632 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.307726 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341656 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341710 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341733 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341824 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341839 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341879 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/88c02885-785b-46df-bbe7-259243eee84a-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341936 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.341983 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: 
\"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-dev\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342065 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342107 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-run\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75svs\" (UniqueName: \"kubernetes.io/projected/88c02885-785b-46df-bbe7-259243eee84a-kube-api-access-75svs\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342283 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342342 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-sys\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-sys\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342700 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-run\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.342936 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.343027 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-etc-machine-id\") pod 
\"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.343201 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-dev\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.343242 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.343276 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/88c02885-785b-46df-bbe7-259243eee84a-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.346584 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/88c02885-785b-46df-bbe7-259243eee84a-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.347217 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.347558 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.348684 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.359220 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75svs\" (UniqueName: \"kubernetes.io/projected/88c02885-785b-46df-bbe7-259243eee84a-kube-api-access-75svs\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.359414 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c02885-785b-46df-bbe7-259243eee84a-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"88c02885-785b-46df-bbe7-259243eee84a\") " pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.511885 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.950868 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.958524 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.964152 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.970715 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f730af23-7b78-4b11-9ec6-aa5a06513600","Type":"ContainerStarted","Data":"b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8"} Feb 19 09:53:26 crc kubenswrapper[4780]: I0219 09:53:26.987176 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.010238 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.029191 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.029169041 podStartE2EDuration="3.029169041s" podCreationTimestamp="2026-02-19 09:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:27.026664957 +0000 UTC m=+5549.770322406" watchObservedRunningTime="2026-02-19 09:53:27.029169041 +0000 UTC m=+5549.772826490" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.062969 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063014 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-dev\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063060 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-lib-modules\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063093 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-config-data\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063109 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-run\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063163 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-sys\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " 
pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063187 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063231 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/30e71513-4012-4b35-8571-9349d75bed4b-ceph\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063254 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-scripts\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063279 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063333 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsb4k\" (UniqueName: \"kubernetes.io/projected/30e71513-4012-4b35-8571-9349d75bed4b-kube-api-access-fsb4k\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063349 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063368 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.063395 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.124922 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.164943 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-lib-modules\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc 
kubenswrapper[4780]: I0219 09:53:27.165024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-config-data\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-run\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165077 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165099 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-sys\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-lib-modules\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165136 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-nvme\") pod \"cinder-backup-0\" (UID: 
\"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165210 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165264 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/30e71513-4012-4b35-8571-9349d75bed4b-ceph\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165314 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-scripts\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165381 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165428 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165460 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fsb4k\" (UniqueName: \"kubernetes.io/projected/30e71513-4012-4b35-8571-9349d75bed4b-kube-api-access-fsb4k\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165481 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165504 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165672 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165692 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-dev\") pod \"cinder-backup-0\" (UID: 
\"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.165782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-dev\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166154 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166200 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-run\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166245 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166260 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.166405 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30e71513-4012-4b35-8571-9349d75bed4b-sys\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.171091 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.171116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.171112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-config-data\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 
09:53:27.171521 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/30e71513-4012-4b35-8571-9349d75bed4b-ceph\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.175685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e71513-4012-4b35-8571-9349d75bed4b-scripts\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.185932 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsb4k\" (UniqueName: \"kubernetes.io/projected/30e71513-4012-4b35-8571-9349d75bed4b-kube-api-access-fsb4k\") pod \"cinder-backup-0\" (UID: \"30e71513-4012-4b35-8571-9349d75bed4b\") " pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.282763 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.633414 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h7gq2"] Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.635741 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.669274 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h7gq2"] Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.690479 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-utilities\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.690541 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vss59\" (UniqueName: \"kubernetes.io/projected/297d76df-6747-48c9-80d5-5501eb37a9e8-kube-api-access-vss59\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.690782 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-catalog-content\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.792581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-utilities\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.792634 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vss59\" (UniqueName: \"kubernetes.io/projected/297d76df-6747-48c9-80d5-5501eb37a9e8-kube-api-access-vss59\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.792697 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-catalog-content\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.793422 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-utilities\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.793479 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-catalog-content\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.847140 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vss59\" (UniqueName: \"kubernetes.io/projected/297d76df-6747-48c9-80d5-5501eb37a9e8-kube-api-access-vss59\") pod \"certified-operators-h7gq2\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.912894 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-backup-0"] Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.959066 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:27 crc kubenswrapper[4780]: I0219 09:53:27.982485 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"88c02885-785b-46df-bbe7-259243eee84a","Type":"ContainerStarted","Data":"40c971395dca6d23cb7e310bf742a3d88a08f2f1f7d4c4e48bafdedca87ae007"} Feb 19 09:53:28 crc kubenswrapper[4780]: I0219 09:53:28.738470 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h7gq2"] Feb 19 09:53:28 crc kubenswrapper[4780]: I0219 09:53:28.905573 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.73:8776/healthcheck\": read tcp 10.217.0.2:44522->10.217.1.73:8776: read: connection reset by peer" Feb 19 09:53:28 crc kubenswrapper[4780]: I0219 09:53:28.994550 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"88c02885-785b-46df-bbe7-259243eee84a","Type":"ContainerStarted","Data":"abf84d4f35c52fd1e7abf9d610567de55024b7f4a83821e0b4a310afe04910ab"} Feb 19 09:53:28 crc kubenswrapper[4780]: I0219 09:53:28.996708 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"30e71513-4012-4b35-8571-9349d75bed4b","Type":"ContainerStarted","Data":"1a485b4e0aef340874de858742afd387d64d98a3fd599b205058e91b50bce9b3"} Feb 19 09:53:28 crc kubenswrapper[4780]: I0219 09:53:28.997980 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7gq2" 
event={"ID":"297d76df-6747-48c9-80d5-5501eb37a9e8","Type":"ContainerStarted","Data":"ab7dad8b420dfe486cb207cd11e0c8adf7e0c1b3ecc3f3a71eacd620cecba698"} Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.001351 4780 generic.go:334] "Generic (PLEG): container finished" podID="ebefc259-5da4-4803-b476-6bdaa0191385" containerID="62c8507ccef87cdf2ceefe5078cd2da8763f3d648af48437ab86e3f9c49842f9" exitCode=0 Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.001384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebefc259-5da4-4803-b476-6bdaa0191385","Type":"ContainerDied","Data":"62c8507ccef87cdf2ceefe5078cd2da8763f3d648af48437ab86e3f9c49842f9"} Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.448690 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.540452 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.650569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data-custom\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.650824 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-combined-ca-bundle\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.650947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2bn4\" (UniqueName: 
\"kubernetes.io/projected/ebefc259-5da4-4803-b476-6bdaa0191385-kube-api-access-t2bn4\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.651002 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebefc259-5da4-4803-b476-6bdaa0191385-etc-machine-id\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.651044 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.651118 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-scripts\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.651220 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebefc259-5da4-4803-b476-6bdaa0191385-logs\") pod \"ebefc259-5da4-4803-b476-6bdaa0191385\" (UID: \"ebefc259-5da4-4803-b476-6bdaa0191385\") " Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.652084 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebefc259-5da4-4803-b476-6bdaa0191385-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.652643 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebefc259-5da4-4803-b476-6bdaa0191385-logs" (OuterVolumeSpecName: "logs") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.653748 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebefc259-5da4-4803-b476-6bdaa0191385-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.653776 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebefc259-5da4-4803-b476-6bdaa0191385-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.660245 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.661597 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebefc259-5da4-4803-b476-6bdaa0191385-kube-api-access-t2bn4" (OuterVolumeSpecName: "kube-api-access-t2bn4") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "kube-api-access-t2bn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.686254 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-scripts" (OuterVolumeSpecName: "scripts") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.724036 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.727153 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data" (OuterVolumeSpecName: "config-data") pod "ebefc259-5da4-4803-b476-6bdaa0191385" (UID: "ebefc259-5da4-4803-b476-6bdaa0191385"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.756935 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.759783 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.761191 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.761271 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2bn4\" (UniqueName: \"kubernetes.io/projected/ebefc259-5da4-4803-b476-6bdaa0191385-kube-api-access-t2bn4\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:29 crc kubenswrapper[4780]: I0219 09:53:29.761332 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebefc259-5da4-4803-b476-6bdaa0191385-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.019969 4780 generic.go:334] "Generic (PLEG): container finished" podID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerID="e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166" exitCode=0 Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.020430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7gq2" event={"ID":"297d76df-6747-48c9-80d5-5501eb37a9e8","Type":"ContainerDied","Data":"e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166"} Feb 19 09:53:30 crc 
kubenswrapper[4780]: I0219 09:53:30.027276 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebefc259-5da4-4803-b476-6bdaa0191385","Type":"ContainerDied","Data":"c86c76b22eb88bc79bc86a4dabbbb8fd6ff9977b8e49db04e649ae2db320b007"} Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.027324 4780 scope.go:117] "RemoveContainer" containerID="62c8507ccef87cdf2ceefe5078cd2da8763f3d648af48437ab86e3f9c49842f9" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.027466 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.038782 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"88c02885-785b-46df-bbe7-259243eee84a","Type":"ContainerStarted","Data":"bf00897bc1dc8f8d6d60abe4aba65cda9508a62c07cc5124792d0c73e7a276f9"} Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.074556 4780 scope.go:117] "RemoveContainer" containerID="95c82788501d5f7a44387bb6bd1914be2bffac7a4c40486cfe0e67dcc4572351" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.091644 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.859469797 podStartE2EDuration="4.091618975s" podCreationTimestamp="2026-02-19 09:53:26 +0000 UTC" firstStartedPulling="2026-02-19 09:53:27.130107589 +0000 UTC m=+5549.873765048" lastFinishedPulling="2026-02-19 09:53:28.362256777 +0000 UTC m=+5551.105914226" observedRunningTime="2026-02-19 09:53:30.084259455 +0000 UTC m=+5552.827916924" watchObservedRunningTime="2026-02-19 09:53:30.091618975 +0000 UTC m=+5552.835276434" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.124941 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.138482 4780 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-api-0"] Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.147947 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:53:30 crc kubenswrapper[4780]: E0219 09:53:30.148500 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.148526 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api" Feb 19 09:53:30 crc kubenswrapper[4780]: E0219 09:53:30.148549 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api-log" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.148558 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api-log" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.148810 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api-log" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.148858 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" containerName="cinder-api" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.150217 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.155169 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.158053 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279562 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-config-data-custom\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-config-data\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279657 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279680 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17681618-f82f-482e-a791-2eaa61b665b9-logs\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279765 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-scripts\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279828 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17681618-f82f-482e-a791-2eaa61b665b9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.279869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxm8l\" (UniqueName: \"kubernetes.io/projected/17681618-f82f-482e-a791-2eaa61b665b9-kube-api-access-zxm8l\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.315598 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.315980 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.320556 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.335299 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.349922 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.353492 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 09:53:30 crc 
kubenswrapper[4780]: I0219 09:53:30.353550 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381302 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-config-data-custom\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381348 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-config-data\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17681618-f82f-482e-a791-2eaa61b665b9-logs\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381446 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-scripts\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381466 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17681618-f82f-482e-a791-2eaa61b665b9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.381488 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxm8l\" (UniqueName: \"kubernetes.io/projected/17681618-f82f-482e-a791-2eaa61b665b9-kube-api-access-zxm8l\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.384088 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17681618-f82f-482e-a791-2eaa61b665b9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.387565 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17681618-f82f-482e-a791-2eaa61b665b9-logs\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.388731 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.390977 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-scripts\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" 
Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.391160 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-config-data\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.394219 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17681618-f82f-482e-a791-2eaa61b665b9-config-data-custom\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.412745 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxm8l\" (UniqueName: \"kubernetes.io/projected/17681618-f82f-482e-a791-2eaa61b665b9-kube-api-access-zxm8l\") pod \"cinder-api-0\" (UID: \"17681618-f82f-482e-a791-2eaa61b665b9\") " pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.468652 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 09:53:30 crc kubenswrapper[4780]: I0219 09:53:30.980014 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.093546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"30e71513-4012-4b35-8571-9349d75bed4b","Type":"ContainerStarted","Data":"09c60bbe984df3ca61fb3c8655aa11535d74240853388e5dc7e8cab73666f8e8"} Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.093588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"30e71513-4012-4b35-8571-9349d75bed4b","Type":"ContainerStarted","Data":"4f1c9231637de6ef46bcad0d6874fd3d70a95e5837244ae16a002ce3f3c16fa4"} Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.106250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17681618-f82f-482e-a791-2eaa61b665b9","Type":"ContainerStarted","Data":"175f8ad47d4f5cc6493c0a193a8c7617e59fab9d6759dbd38916f418686fd457"} Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.107210 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.110565 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.111297 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.131177 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.455582691 podStartE2EDuration="5.130818756s" podCreationTimestamp="2026-02-19 09:53:26 +0000 UTC" firstStartedPulling="2026-02-19 09:53:28.050947193 +0000 UTC m=+5550.794604642" lastFinishedPulling="2026-02-19 
09:53:29.726183258 +0000 UTC m=+5552.469840707" observedRunningTime="2026-02-19 09:53:31.127115111 +0000 UTC m=+5553.870772560" watchObservedRunningTime="2026-02-19 09:53:31.130818756 +0000 UTC m=+5553.874476205" Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.512980 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:31 crc kubenswrapper[4780]: I0219 09:53:31.960460 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebefc259-5da4-4803-b476-6bdaa0191385" path="/var/lib/kubelet/pods/ebefc259-5da4-4803-b476-6bdaa0191385/volumes" Feb 19 09:53:32 crc kubenswrapper[4780]: I0219 09:53:32.146703 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17681618-f82f-482e-a791-2eaa61b665b9","Type":"ContainerStarted","Data":"c06ab121ea0d0f26f07ee84b5d3385c36479a66a7da3dd4d5f9dc417f5c138d1"} Feb 19 09:53:32 crc kubenswrapper[4780]: I0219 09:53:32.284629 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 19 09:53:33 crc kubenswrapper[4780]: I0219 09:53:33.158636 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17681618-f82f-482e-a791-2eaa61b665b9","Type":"ContainerStarted","Data":"2770f9f372fbda31bc7625e53b60f5aadb0b8820db00c7404020e48f4d2793dd"} Feb 19 09:53:33 crc kubenswrapper[4780]: I0219 09:53:33.160353 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 09:53:33 crc kubenswrapper[4780]: I0219 09:53:33.161231 4780 generic.go:334] "Generic (PLEG): container finished" podID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerID="4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884" exitCode=0 Feb 19 09:53:33 crc kubenswrapper[4780]: I0219 09:53:33.161331 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7gq2" 
event={"ID":"297d76df-6747-48c9-80d5-5501eb37a9e8","Type":"ContainerDied","Data":"4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884"} Feb 19 09:53:33 crc kubenswrapper[4780]: I0219 09:53:33.188963 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.188943606 podStartE2EDuration="3.188943606s" podCreationTimestamp="2026-02-19 09:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:33.180380496 +0000 UTC m=+5555.924037945" watchObservedRunningTime="2026-02-19 09:53:33.188943606 +0000 UTC m=+5555.932601045" Feb 19 09:53:34 crc kubenswrapper[4780]: I0219 09:53:34.173750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7gq2" event={"ID":"297d76df-6747-48c9-80d5-5501eb37a9e8","Type":"ContainerStarted","Data":"ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e"} Feb 19 09:53:34 crc kubenswrapper[4780]: I0219 09:53:34.670780 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 09:53:34 crc kubenswrapper[4780]: I0219 09:53:34.693062 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h7gq2" podStartSLOduration=4.14844405 podStartE2EDuration="7.693042305s" podCreationTimestamp="2026-02-19 09:53:27 +0000 UTC" firstStartedPulling="2026-02-19 09:53:30.021983902 +0000 UTC m=+5552.765641351" lastFinishedPulling="2026-02-19 09:53:33.566582157 +0000 UTC m=+5556.310239606" observedRunningTime="2026-02-19 09:53:34.207522537 +0000 UTC m=+5556.951179996" watchObservedRunningTime="2026-02-19 09:53:34.693042305 +0000 UTC m=+5557.436699754" Feb 19 09:53:34 crc kubenswrapper[4780]: I0219 09:53:34.713007 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 
09:53:35 crc kubenswrapper[4780]: I0219 09:53:35.182947 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="cinder-scheduler" containerID="cri-o://212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872" gracePeriod=30 Feb 19 09:53:35 crc kubenswrapper[4780]: I0219 09:53:35.183019 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="probe" containerID="cri-o://b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8" gracePeriod=30 Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.198879 4780 generic.go:334] "Generic (PLEG): container finished" podID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerID="b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8" exitCode=0 Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.198968 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f730af23-7b78-4b11-9ec6-aa5a06513600","Type":"ContainerDied","Data":"b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8"} Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.603891 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.714581 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f730af23-7b78-4b11-9ec6-aa5a06513600-etc-machine-id\") pod \"f730af23-7b78-4b11-9ec6-aa5a06513600\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.714844 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-combined-ca-bundle\") pod \"f730af23-7b78-4b11-9ec6-aa5a06513600\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.714898 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-scripts\") pod \"f730af23-7b78-4b11-9ec6-aa5a06513600\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.714957 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6rx8\" (UniqueName: \"kubernetes.io/projected/f730af23-7b78-4b11-9ec6-aa5a06513600-kube-api-access-b6rx8\") pod \"f730af23-7b78-4b11-9ec6-aa5a06513600\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.715022 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data\") pod \"f730af23-7b78-4b11-9ec6-aa5a06513600\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.715122 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data-custom\") pod \"f730af23-7b78-4b11-9ec6-aa5a06513600\" (UID: \"f730af23-7b78-4b11-9ec6-aa5a06513600\") " Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.716856 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f730af23-7b78-4b11-9ec6-aa5a06513600-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f730af23-7b78-4b11-9ec6-aa5a06513600" (UID: "f730af23-7b78-4b11-9ec6-aa5a06513600"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.722923 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f730af23-7b78-4b11-9ec6-aa5a06513600-kube-api-access-b6rx8" (OuterVolumeSpecName: "kube-api-access-b6rx8") pod "f730af23-7b78-4b11-9ec6-aa5a06513600" (UID: "f730af23-7b78-4b11-9ec6-aa5a06513600"). InnerVolumeSpecName "kube-api-access-b6rx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.729678 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.733420 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f730af23-7b78-4b11-9ec6-aa5a06513600" (UID: "f730af23-7b78-4b11-9ec6-aa5a06513600"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.748433 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-scripts" (OuterVolumeSpecName: "scripts") pod "f730af23-7b78-4b11-9ec6-aa5a06513600" (UID: "f730af23-7b78-4b11-9ec6-aa5a06513600"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.818109 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f730af23-7b78-4b11-9ec6-aa5a06513600-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.818171 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.818186 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6rx8\" (UniqueName: \"kubernetes.io/projected/f730af23-7b78-4b11-9ec6-aa5a06513600-kube-api-access-b6rx8\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.818202 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.825285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f730af23-7b78-4b11-9ec6-aa5a06513600" (UID: "f730af23-7b78-4b11-9ec6-aa5a06513600"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.866175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data" (OuterVolumeSpecName: "config-data") pod "f730af23-7b78-4b11-9ec6-aa5a06513600" (UID: "f730af23-7b78-4b11-9ec6-aa5a06513600"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.919800 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:36 crc kubenswrapper[4780]: I0219 09:53:36.919844 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f730af23-7b78-4b11-9ec6-aa5a06513600-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.210270 4780 generic.go:334] "Generic (PLEG): container finished" podID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerID="212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872" exitCode=0 Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.210357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f730af23-7b78-4b11-9ec6-aa5a06513600","Type":"ContainerDied","Data":"212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872"} Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.210412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f730af23-7b78-4b11-9ec6-aa5a06513600","Type":"ContainerDied","Data":"1f21d8f4c2ed10d04dd7e88d598fd0eb0a992f29f92c7469ccadd33642d8fa37"} Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.210431 4780 scope.go:117] "RemoveContainer" 
containerID="b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.210364 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.235620 4780 scope.go:117] "RemoveContainer" containerID="212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.263979 4780 scope.go:117] "RemoveContainer" containerID="b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.265188 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:37 crc kubenswrapper[4780]: E0219 09:53:37.270951 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8\": container with ID starting with b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8 not found: ID does not exist" containerID="b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.271014 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8"} err="failed to get container status \"b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8\": rpc error: code = NotFound desc = could not find container \"b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8\": container with ID starting with b402c1f5ee5004cb02f8c4458367749f255de0206a44aae42c865e379fae1ca8 not found: ID does not exist" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.271051 4780 scope.go:117] "RemoveContainer" 
containerID="212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872" Feb 19 09:53:37 crc kubenswrapper[4780]: E0219 09:53:37.271654 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872\": container with ID starting with 212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872 not found: ID does not exist" containerID="212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.271678 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872"} err="failed to get container status \"212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872\": rpc error: code = NotFound desc = could not find container \"212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872\": container with ID starting with 212c72dbb3cc75b6195de595bdfc4aedf7300e3418a7f16c2501f92f25565872 not found: ID does not exist" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.295859 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.329612 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:37 crc kubenswrapper[4780]: E0219 09:53:37.330134 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="cinder-scheduler" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.330153 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="cinder-scheduler" Feb 19 09:53:37 crc kubenswrapper[4780]: E0219 09:53:37.330170 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="probe" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.330178 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="probe" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.330408 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="cinder-scheduler" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.330429 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" containerName="probe" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.332212 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.334526 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.341407 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.431881 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crc8d\" (UniqueName: \"kubernetes.io/projected/1b7508c7-4a7b-4c69-ac07-655e84e602e5-kube-api-access-crc8d\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.431981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.432271 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.432426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.432610 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b7508c7-4a7b-4c69-ac07-655e84e602e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.432653 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.519195 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b7508c7-4a7b-4c69-ac07-655e84e602e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " 
pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534593 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534655 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crc8d\" (UniqueName: \"kubernetes.io/projected/1b7508c7-4a7b-4c69-ac07-655e84e602e5-kube-api-access-crc8d\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534699 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b7508c7-4a7b-4c69-ac07-655e84e602e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.534846 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.539142 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.539510 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.540574 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.546869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b7508c7-4a7b-4c69-ac07-655e84e602e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.557020 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crc8d\" (UniqueName: \"kubernetes.io/projected/1b7508c7-4a7b-4c69-ac07-655e84e602e5-kube-api-access-crc8d\") pod \"cinder-scheduler-0\" (UID: 
\"1b7508c7-4a7b-4c69-ac07-655e84e602e5\") " pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.653183 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.957022 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f730af23-7b78-4b11-9ec6-aa5a06513600" path="/var/lib/kubelet/pods/f730af23-7b78-4b11-9ec6-aa5a06513600/volumes" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.960277 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:37 crc kubenswrapper[4780]: I0219 09:53:37.960311 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:38 crc kubenswrapper[4780]: I0219 09:53:38.008653 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:38 crc kubenswrapper[4780]: I0219 09:53:38.143120 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 09:53:38 crc kubenswrapper[4780]: I0219 09:53:38.232064 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1b7508c7-4a7b-4c69-ac07-655e84e602e5","Type":"ContainerStarted","Data":"c056e6e164f86eb0ebad3fdbe1c8640b0bb97587558e69767ea7d86f67175b45"} Feb 19 09:53:38 crc kubenswrapper[4780]: I0219 09:53:38.283174 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:38 crc kubenswrapper[4780]: I0219 09:53:38.336474 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h7gq2"] Feb 19 09:53:39 crc kubenswrapper[4780]: I0219 09:53:39.249621 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1b7508c7-4a7b-4c69-ac07-655e84e602e5","Type":"ContainerStarted","Data":"6c13dca26e4dc114fe966a935bede0c4d6585deae6eff94874e5907c1bb363be"} Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.258639 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h7gq2" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="registry-server" containerID="cri-o://ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e" gracePeriod=2 Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.260290 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1b7508c7-4a7b-4c69-ac07-655e84e602e5","Type":"ContainerStarted","Data":"b5d55c701f334d0bedd0eb83921ea987724f70d97fc790896965e28833c6e710"} Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.298850 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.298826459 podStartE2EDuration="3.298826459s" podCreationTimestamp="2026-02-19 09:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:53:40.287474426 +0000 UTC m=+5563.031131875" watchObservedRunningTime="2026-02-19 09:53:40.298826459 +0000 UTC m=+5563.042483908" Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.753643 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.902196 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-catalog-content\") pod \"297d76df-6747-48c9-80d5-5501eb37a9e8\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.902416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-utilities\") pod \"297d76df-6747-48c9-80d5-5501eb37a9e8\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.902472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vss59\" (UniqueName: \"kubernetes.io/projected/297d76df-6747-48c9-80d5-5501eb37a9e8-kube-api-access-vss59\") pod \"297d76df-6747-48c9-80d5-5501eb37a9e8\" (UID: \"297d76df-6747-48c9-80d5-5501eb37a9e8\") " Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.903290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-utilities" (OuterVolumeSpecName: "utilities") pod "297d76df-6747-48c9-80d5-5501eb37a9e8" (UID: "297d76df-6747-48c9-80d5-5501eb37a9e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.908067 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297d76df-6747-48c9-80d5-5501eb37a9e8-kube-api-access-vss59" (OuterVolumeSpecName: "kube-api-access-vss59") pod "297d76df-6747-48c9-80d5-5501eb37a9e8" (UID: "297d76df-6747-48c9-80d5-5501eb37a9e8"). InnerVolumeSpecName "kube-api-access-vss59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:53:40 crc kubenswrapper[4780]: I0219 09:53:40.971443 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "297d76df-6747-48c9-80d5-5501eb37a9e8" (UID: "297d76df-6747-48c9-80d5-5501eb37a9e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.004594 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vss59\" (UniqueName: \"kubernetes.io/projected/297d76df-6747-48c9-80d5-5501eb37a9e8-kube-api-access-vss59\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.004632 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.004644 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297d76df-6747-48c9-80d5-5501eb37a9e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.268944 4780 generic.go:334] "Generic (PLEG): container finished" podID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerID="ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e" exitCode=0 Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.270547 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h7gq2" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.274232 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7gq2" event={"ID":"297d76df-6747-48c9-80d5-5501eb37a9e8","Type":"ContainerDied","Data":"ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e"} Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.274317 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7gq2" event={"ID":"297d76df-6747-48c9-80d5-5501eb37a9e8","Type":"ContainerDied","Data":"ab7dad8b420dfe486cb207cd11e0c8adf7e0c1b3ecc3f3a71eacd620cecba698"} Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.274353 4780 scope.go:117] "RemoveContainer" containerID="ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.315730 4780 scope.go:117] "RemoveContainer" containerID="4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.326655 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h7gq2"] Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.343939 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h7gq2"] Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.345180 4780 scope.go:117] "RemoveContainer" containerID="e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.363546 4780 scope.go:117] "RemoveContainer" containerID="ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e" Feb 19 09:53:41 crc kubenswrapper[4780]: E0219 09:53:41.364096 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e\": container with ID starting with ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e not found: ID does not exist" containerID="ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.364201 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e"} err="failed to get container status \"ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e\": rpc error: code = NotFound desc = could not find container \"ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e\": container with ID starting with ee9b305a810e16e0a3e0bbb7b6322b2089512ab355e8f25ecc34053085399b4e not found: ID does not exist" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.364380 4780 scope.go:117] "RemoveContainer" containerID="4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884" Feb 19 09:53:41 crc kubenswrapper[4780]: E0219 09:53:41.364843 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884\": container with ID starting with 4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884 not found: ID does not exist" containerID="4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.364876 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884"} err="failed to get container status \"4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884\": rpc error: code = NotFound desc = could not find container \"4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884\": container with ID 
starting with 4ff865ac0844ac0409a78239ab351a976a7a34107ba4df82b4b9351944780884 not found: ID does not exist" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.364894 4780 scope.go:117] "RemoveContainer" containerID="e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166" Feb 19 09:53:41 crc kubenswrapper[4780]: E0219 09:53:41.365395 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166\": container with ID starting with e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166 not found: ID does not exist" containerID="e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.365420 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166"} err="failed to get container status \"e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166\": rpc error: code = NotFound desc = could not find container \"e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166\": container with ID starting with e4639e30af5cc9be784750dea8afe6f96ae70617954dbbfe8067d069aed1b166 not found: ID does not exist" Feb 19 09:53:41 crc kubenswrapper[4780]: I0219 09:53:41.955994 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" path="/var/lib/kubelet/pods/297d76df-6747-48c9-80d5-5501eb37a9e8/volumes" Feb 19 09:53:42 crc kubenswrapper[4780]: I0219 09:53:42.257029 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 09:53:42 crc kubenswrapper[4780]: I0219 09:53:42.653625 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 09:53:47 crc kubenswrapper[4780]: I0219 09:53:47.858321 
4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 09:54:36 crc kubenswrapper[4780]: I0219 09:54:36.336359 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:54:36 crc kubenswrapper[4780]: I0219 09:54:36.337802 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:55:06 crc kubenswrapper[4780]: I0219 09:55:06.108067 4780 scope.go:117] "RemoveContainer" containerID="71774290b0d877a0fbc1f5b6be2a5b0040f42c15dc98f83069a6b45edf660201" Feb 19 09:55:06 crc kubenswrapper[4780]: I0219 09:55:06.143448 4780 scope.go:117] "RemoveContainer" containerID="53d9c8d019593ca7fe6037c39d01d5d9b297c16b3f2f65eca288de60884544a8" Feb 19 09:55:06 crc kubenswrapper[4780]: I0219 09:55:06.336728 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:55:06 crc kubenswrapper[4780]: I0219 09:55:06.337058 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:55:14 crc 
kubenswrapper[4780]: I0219 09:55:14.063889 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6v8gc"] Feb 19 09:55:14 crc kubenswrapper[4780]: I0219 09:55:14.081030 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-eac9-account-create-update-7q7gr"] Feb 19 09:55:14 crc kubenswrapper[4780]: I0219 09:55:14.091938 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6v8gc"] Feb 19 09:55:14 crc kubenswrapper[4780]: I0219 09:55:14.101059 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-eac9-account-create-update-7q7gr"] Feb 19 09:55:15 crc kubenswrapper[4780]: I0219 09:55:15.963715 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562e7a0b-d308-4e21-9164-8ef5ff574af9" path="/var/lib/kubelet/pods/562e7a0b-d308-4e21-9164-8ef5ff574af9/volumes" Feb 19 09:55:15 crc kubenswrapper[4780]: I0219 09:55:15.965257 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a944e3-b129-4cc4-9792-e07bb079f89c" path="/var/lib/kubelet/pods/e4a944e3-b129-4cc4-9792-e07bb079f89c/volumes" Feb 19 09:55:19 crc kubenswrapper[4780]: I0219 09:55:19.041945 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-f8ppv"] Feb 19 09:55:19 crc kubenswrapper[4780]: I0219 09:55:19.054087 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-f8ppv"] Feb 19 09:55:19 crc kubenswrapper[4780]: I0219 09:55:19.960599 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a54176-4984-4c7b-b0e1-f424d7cbd298" path="/var/lib/kubelet/pods/f3a54176-4984-4c7b-b0e1-f424d7cbd298/volumes" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.813786 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4lqmn"] Feb 19 09:55:25 crc kubenswrapper[4780]: E0219 09:55:25.814871 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="extract-content" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.814891 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="extract-content" Feb 19 09:55:25 crc kubenswrapper[4780]: E0219 09:55:25.814930 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="extract-utilities" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.814939 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="extract-utilities" Feb 19 09:55:25 crc kubenswrapper[4780]: E0219 09:55:25.814967 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="registry-server" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.814975 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="registry-server" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.815231 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="297d76df-6747-48c9-80d5-5501eb37a9e8" containerName="registry-server" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.815978 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.822258 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6xl8f" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.823079 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.830333 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4lqmn"] Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.872346 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-log-ovn\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.872415 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgww\" (UniqueName: \"kubernetes.io/projected/417d0039-dd62-4b81-bcb7-5859c1d11b4e-kube-api-access-dxgww\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.872452 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-run-ovn\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.872620 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/417d0039-dd62-4b81-bcb7-5859c1d11b4e-scripts\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.872654 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-run\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.872680 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-l695t"] Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.874485 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.921657 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l695t"] Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977270 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-etc-ovs\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-log\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/417d0039-dd62-4b81-bcb7-5859c1d11b4e-scripts\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977414 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-run\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-lib\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977463 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-log-ovn\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977481 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpsf\" (UniqueName: \"kubernetes.io/projected/ed01b93b-9b96-45fd-ac68-1ca3e9891906-kube-api-access-2fpsf\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977503 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxgww\" (UniqueName: \"kubernetes.io/projected/417d0039-dd62-4b81-bcb7-5859c1d11b4e-kube-api-access-dxgww\") 
pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-run-ovn\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977561 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-run\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.977581 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed01b93b-9b96-45fd-ac68-1ca3e9891906-scripts\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.978093 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-run\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.978190 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-log-ovn\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc 
kubenswrapper[4780]: I0219 09:55:25.978230 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/417d0039-dd62-4b81-bcb7-5859c1d11b4e-var-run-ovn\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:25 crc kubenswrapper[4780]: I0219 09:55:25.979514 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/417d0039-dd62-4b81-bcb7-5859c1d11b4e-scripts\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.023061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxgww\" (UniqueName: \"kubernetes.io/projected/417d0039-dd62-4b81-bcb7-5859c1d11b4e-kube-api-access-dxgww\") pod \"ovn-controller-4lqmn\" (UID: \"417d0039-dd62-4b81-bcb7-5859c1d11b4e\") " pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.078812 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-run\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.078861 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed01b93b-9b96-45fd-ac68-1ca3e9891906-scripts\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.078931 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-etc-ovs\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.079012 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-log\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.079048 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-lib\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.079089 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpsf\" (UniqueName: \"kubernetes.io/projected/ed01b93b-9b96-45fd-ac68-1ca3e9891906-kube-api-access-2fpsf\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.079797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-run\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.081621 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed01b93b-9b96-45fd-ac68-1ca3e9891906-scripts\") pod \"ovn-controller-ovs-l695t\" (UID: 
\"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.083258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-log\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.083336 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-var-lib\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.083976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ed01b93b-9b96-45fd-ac68-1ca3e9891906-etc-ovs\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.105384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpsf\" (UniqueName: \"kubernetes.io/projected/ed01b93b-9b96-45fd-ac68-1ca3e9891906-kube-api-access-2fpsf\") pod \"ovn-controller-ovs-l695t\" (UID: \"ed01b93b-9b96-45fd-ac68-1ca3e9891906\") " pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.132428 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.211441 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:26 crc kubenswrapper[4780]: I0219 09:55:26.615643 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4lqmn"] Feb 19 09:55:26 crc kubenswrapper[4780]: W0219 09:55:26.615809 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod417d0039_dd62_4b81_bcb7_5859c1d11b4e.slice/crio-41e8e4d0423998969cf3859cfc9ad41e242c505edb6909817bf3d459562cbe1a WatchSource:0}: Error finding container 41e8e4d0423998969cf3859cfc9ad41e242c505edb6909817bf3d459562cbe1a: Status 404 returned error can't find the container with id 41e8e4d0423998969cf3859cfc9ad41e242c505edb6909817bf3d459562cbe1a Feb 19 09:55:27 crc kubenswrapper[4780]: W0219 09:55:27.144914 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded01b93b_9b96_45fd_ac68_1ca3e9891906.slice/crio-35d00db102d98638e2225c1ffac710aa26b7241e7b2976e4cceeb2822132756e WatchSource:0}: Error finding container 35d00db102d98638e2225c1ffac710aa26b7241e7b2976e4cceeb2822132756e: Status 404 returned error can't find the container with id 35d00db102d98638e2225c1ffac710aa26b7241e7b2976e4cceeb2822132756e Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.146340 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l695t"] Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.425651 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn" event={"ID":"417d0039-dd62-4b81-bcb7-5859c1d11b4e","Type":"ContainerStarted","Data":"96218f1657274de0fe05f5b93f3a79c183bc47e2453e24596979ffa98f5339fb"} Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.425940 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn" 
event={"ID":"417d0039-dd62-4b81-bcb7-5859c1d11b4e","Type":"ContainerStarted","Data":"41e8e4d0423998969cf3859cfc9ad41e242c505edb6909817bf3d459562cbe1a"} Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.426221 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4lqmn" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.429584 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l695t" event={"ID":"ed01b93b-9b96-45fd-ac68-1ca3e9891906","Type":"ContainerStarted","Data":"35d00db102d98638e2225c1ffac710aa26b7241e7b2976e4cceeb2822132756e"} Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.432967 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t8bv6"] Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.434115 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.441323 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t8bv6"] Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.443716 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.469380 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4lqmn" podStartSLOduration=2.469020747 podStartE2EDuration="2.469020747s" podCreationTimestamp="2026-02-19 09:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:55:27.459557944 +0000 UTC m=+5670.203215403" watchObservedRunningTime="2026-02-19 09:55:27.469020747 +0000 UTC m=+5670.212678196" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.506784 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-ovn-rundir\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.506846 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-ovs-rundir\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.507142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-config\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.507223 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnp4\" (UniqueName: \"kubernetes.io/projected/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-kube-api-access-kgnp4\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.608661 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-config\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.608724 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kgnp4\" (UniqueName: \"kubernetes.io/projected/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-kube-api-access-kgnp4\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.608777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-ovn-rundir\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.608803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-ovs-rundir\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.609113 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-ovn-rundir\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.609140 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-ovs-rundir\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.609454 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-config\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.626367 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnp4\" (UniqueName: \"kubernetes.io/projected/e1682f87-dd9a-4fc6-96df-f50c80a4af9e-kube-api-access-kgnp4\") pod \"ovn-controller-metrics-t8bv6\" (UID: \"e1682f87-dd9a-4fc6-96df-f50c80a4af9e\") " pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:27 crc kubenswrapper[4780]: I0219 09:55:27.767827 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t8bv6" Feb 19 09:55:28 crc kubenswrapper[4780]: I0219 09:55:28.238568 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t8bv6"] Feb 19 09:55:28 crc kubenswrapper[4780]: W0219 09:55:28.242115 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1682f87_dd9a_4fc6_96df_f50c80a4af9e.slice/crio-717bd3c05811a663bee247935111d1dfa029a75429f72862127dd1766113ae70 WatchSource:0}: Error finding container 717bd3c05811a663bee247935111d1dfa029a75429f72862127dd1766113ae70: Status 404 returned error can't find the container with id 717bd3c05811a663bee247935111d1dfa029a75429f72862127dd1766113ae70 Feb 19 09:55:28 crc kubenswrapper[4780]: I0219 09:55:28.439217 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t8bv6" event={"ID":"e1682f87-dd9a-4fc6-96df-f50c80a4af9e","Type":"ContainerStarted","Data":"717bd3c05811a663bee247935111d1dfa029a75429f72862127dd1766113ae70"} Feb 19 09:55:28 crc kubenswrapper[4780]: I0219 09:55:28.441402 4780 generic.go:334] "Generic (PLEG): container finished" podID="ed01b93b-9b96-45fd-ac68-1ca3e9891906" 
containerID="03f2fcf5d54e24ddccf90bfd514c4eb518d54e8949fc241c42732f83617aa931" exitCode=0 Feb 19 09:55:28 crc kubenswrapper[4780]: I0219 09:55:28.441450 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l695t" event={"ID":"ed01b93b-9b96-45fd-ac68-1ca3e9891906","Type":"ContainerDied","Data":"03f2fcf5d54e24ddccf90bfd514c4eb518d54e8949fc241c42732f83617aa931"} Feb 19 09:55:29 crc kubenswrapper[4780]: I0219 09:55:29.455063 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l695t" event={"ID":"ed01b93b-9b96-45fd-ac68-1ca3e9891906","Type":"ContainerStarted","Data":"0704d2313c3faff8513121ade22bbe3e5e859d0b477dc6b97d5cdb021cca5b46"} Feb 19 09:55:29 crc kubenswrapper[4780]: I0219 09:55:29.455588 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:29 crc kubenswrapper[4780]: I0219 09:55:29.455601 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l695t" event={"ID":"ed01b93b-9b96-45fd-ac68-1ca3e9891906","Type":"ContainerStarted","Data":"343157e6dde04d8655aaf1a7fac66023d41b284751c3bd6b6582fdb524a1b8f0"} Feb 19 09:55:29 crc kubenswrapper[4780]: I0219 09:55:29.456816 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t8bv6" event={"ID":"e1682f87-dd9a-4fc6-96df-f50c80a4af9e","Type":"ContainerStarted","Data":"d7041f69a12eb1315f2091bd011abb27238bfd772cbd154681e5d6c50a8e5bd3"} Feb 19 09:55:29 crc kubenswrapper[4780]: I0219 09:55:29.492648 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-l695t" podStartSLOduration=4.4926157700000005 podStartE2EDuration="4.49261577s" podCreationTimestamp="2026-02-19 09:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:55:29.47387662 +0000 UTC m=+5672.217534079" 
watchObservedRunningTime="2026-02-19 09:55:29.49261577 +0000 UTC m=+5672.236273229" Feb 19 09:55:29 crc kubenswrapper[4780]: I0219 09:55:29.504625 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t8bv6" podStartSLOduration=2.504601745 podStartE2EDuration="2.504601745s" podCreationTimestamp="2026-02-19 09:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:55:29.495261845 +0000 UTC m=+5672.238919294" watchObservedRunningTime="2026-02-19 09:55:29.504601745 +0000 UTC m=+5672.248259214" Feb 19 09:55:30 crc kubenswrapper[4780]: I0219 09:55:30.465478 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:55:32 crc kubenswrapper[4780]: I0219 09:55:32.044785 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-54kg5"] Feb 19 09:55:32 crc kubenswrapper[4780]: I0219 09:55:32.065675 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-54kg5"] Feb 19 09:55:33 crc kubenswrapper[4780]: I0219 09:55:33.955970 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba05d2b1-a1c5-473a-ac1c-9b60da468ade" path="/var/lib/kubelet/pods/ba05d2b1-a1c5-473a-ac1c-9b60da468ade/volumes" Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.336178 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.336486 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.336531 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.337106 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.337212 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" gracePeriod=600 Feb 19 09:55:36 crc kubenswrapper[4780]: E0219 09:55:36.459542 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.534060 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" exitCode=0 Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 
09:55:36.534107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df"} Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.534222 4780 scope.go:117] "RemoveContainer" containerID="e96f03e4144d44e4b89b473b042e09edbd6f26be94b76f997b0d0a3b99266763" Feb 19 09:55:36 crc kubenswrapper[4780]: I0219 09:55:36.535047 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:55:36 crc kubenswrapper[4780]: E0219 09:55:36.535469 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:55:46 crc kubenswrapper[4780]: I0219 09:55:46.939658 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:55:46 crc kubenswrapper[4780]: E0219 09:55:46.940411 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.672788 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-2xfhc"] Feb 19 09:55:54 crc 
kubenswrapper[4780]: I0219 09:55:54.676404 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.692821 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-2xfhc"] Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.766514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72xb\" (UniqueName: \"kubernetes.io/projected/bde46373-6de9-4921-8d5a-d0231ca24aa4-kube-api-access-x72xb\") pod \"octavia-db-create-2xfhc\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.766722 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde46373-6de9-4921-8d5a-d0231ca24aa4-operator-scripts\") pod \"octavia-db-create-2xfhc\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.869267 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde46373-6de9-4921-8d5a-d0231ca24aa4-operator-scripts\") pod \"octavia-db-create-2xfhc\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.869504 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x72xb\" (UniqueName: \"kubernetes.io/projected/bde46373-6de9-4921-8d5a-d0231ca24aa4-kube-api-access-x72xb\") pod \"octavia-db-create-2xfhc\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.870883 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde46373-6de9-4921-8d5a-d0231ca24aa4-operator-scripts\") pod \"octavia-db-create-2xfhc\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:54 crc kubenswrapper[4780]: I0219 09:55:54.898669 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x72xb\" (UniqueName: \"kubernetes.io/projected/bde46373-6de9-4921-8d5a-d0231ca24aa4-kube-api-access-x72xb\") pod \"octavia-db-create-2xfhc\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:55 crc kubenswrapper[4780]: I0219 09:55:55.015488 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:55 crc kubenswrapper[4780]: I0219 09:55:55.518166 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-2xfhc"] Feb 19 09:55:55 crc kubenswrapper[4780]: W0219 09:55:55.528548 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde46373_6de9_4921_8d5a_d0231ca24aa4.slice/crio-5b338b7004af5eaa519efd8b0b0c0f591a3355b1b3d2e8760f1f3c061cf04eaa WatchSource:0}: Error finding container 5b338b7004af5eaa519efd8b0b0c0f591a3355b1b3d2e8760f1f3c061cf04eaa: Status 404 returned error can't find the container with id 5b338b7004af5eaa519efd8b0b0c0f591a3355b1b3d2e8760f1f3c061cf04eaa Feb 19 09:55:55 crc kubenswrapper[4780]: I0219 09:55:55.768300 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-2xfhc" event={"ID":"bde46373-6de9-4921-8d5a-d0231ca24aa4","Type":"ContainerStarted","Data":"ec5b152c9ee958b2b78e53351e3ab1db7417042b3622bab63f1db562e973f4c1"} Feb 19 09:55:55 crc kubenswrapper[4780]: I0219 09:55:55.768347 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-db-create-2xfhc" event={"ID":"bde46373-6de9-4921-8d5a-d0231ca24aa4","Type":"ContainerStarted","Data":"5b338b7004af5eaa519efd8b0b0c0f591a3355b1b3d2e8760f1f3c061cf04eaa"} Feb 19 09:55:55 crc kubenswrapper[4780]: I0219 09:55:55.788952 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-2xfhc" podStartSLOduration=1.788918601 podStartE2EDuration="1.788918601s" podCreationTimestamp="2026-02-19 09:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:55:55.784921492 +0000 UTC m=+5698.528578981" watchObservedRunningTime="2026-02-19 09:55:55.788918601 +0000 UTC m=+5698.532576070" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.004297 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-c55c-account-create-update-zz228"] Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.005760 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.012504 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.035970 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c55c-account-create-update-zz228"] Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.102849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1987e82-c3a2-49d9-b234-06252c4b17c2-operator-scripts\") pod \"octavia-c55c-account-create-update-zz228\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.102960 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8l8\" (UniqueName: \"kubernetes.io/projected/f1987e82-c3a2-49d9-b234-06252c4b17c2-kube-api-access-zg8l8\") pod \"octavia-c55c-account-create-update-zz228\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.204168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8l8\" (UniqueName: \"kubernetes.io/projected/f1987e82-c3a2-49d9-b234-06252c4b17c2-kube-api-access-zg8l8\") pod \"octavia-c55c-account-create-update-zz228\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.204353 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1987e82-c3a2-49d9-b234-06252c4b17c2-operator-scripts\") pod 
\"octavia-c55c-account-create-update-zz228\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.205271 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1987e82-c3a2-49d9-b234-06252c4b17c2-operator-scripts\") pod \"octavia-c55c-account-create-update-zz228\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.239499 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8l8\" (UniqueName: \"kubernetes.io/projected/f1987e82-c3a2-49d9-b234-06252c4b17c2-kube-api-access-zg8l8\") pod \"octavia-c55c-account-create-update-zz228\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.356245 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.779925 4780 generic.go:334] "Generic (PLEG): container finished" podID="bde46373-6de9-4921-8d5a-d0231ca24aa4" containerID="ec5b152c9ee958b2b78e53351e3ab1db7417042b3622bab63f1db562e973f4c1" exitCode=0 Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.780065 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-2xfhc" event={"ID":"bde46373-6de9-4921-8d5a-d0231ca24aa4","Type":"ContainerDied","Data":"ec5b152c9ee958b2b78e53351e3ab1db7417042b3622bab63f1db562e973f4c1"} Feb 19 09:55:56 crc kubenswrapper[4780]: I0219 09:55:56.997151 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c55c-account-create-update-zz228"] Feb 19 09:55:57 crc kubenswrapper[4780]: W0219 09:55:57.003996 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1987e82_c3a2_49d9_b234_06252c4b17c2.slice/crio-3b255561964679978b119594f7cc517fae5ccf67f25490a04599914b9324b74a WatchSource:0}: Error finding container 3b255561964679978b119594f7cc517fae5ccf67f25490a04599914b9324b74a: Status 404 returned error can't find the container with id 3b255561964679978b119594f7cc517fae5ccf67f25490a04599914b9324b74a Feb 19 09:55:57 crc kubenswrapper[4780]: I0219 09:55:57.793919 4780 generic.go:334] "Generic (PLEG): container finished" podID="f1987e82-c3a2-49d9-b234-06252c4b17c2" containerID="810a0c7581deab1ae63299efa3f63211086a9f75f805e8485a135839fef7580a" exitCode=0 Feb 19 09:55:57 crc kubenswrapper[4780]: I0219 09:55:57.794014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c55c-account-create-update-zz228" event={"ID":"f1987e82-c3a2-49d9-b234-06252c4b17c2","Type":"ContainerDied","Data":"810a0c7581deab1ae63299efa3f63211086a9f75f805e8485a135839fef7580a"} Feb 19 09:55:57 crc kubenswrapper[4780]: I0219 
09:55:57.794394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c55c-account-create-update-zz228" event={"ID":"f1987e82-c3a2-49d9-b234-06252c4b17c2","Type":"ContainerStarted","Data":"3b255561964679978b119594f7cc517fae5ccf67f25490a04599914b9324b74a"} Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.279388 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.344249 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde46373-6de9-4921-8d5a-d0231ca24aa4-operator-scripts\") pod \"bde46373-6de9-4921-8d5a-d0231ca24aa4\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.344423 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x72xb\" (UniqueName: \"kubernetes.io/projected/bde46373-6de9-4921-8d5a-d0231ca24aa4-kube-api-access-x72xb\") pod \"bde46373-6de9-4921-8d5a-d0231ca24aa4\" (UID: \"bde46373-6de9-4921-8d5a-d0231ca24aa4\") " Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.345038 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bde46373-6de9-4921-8d5a-d0231ca24aa4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bde46373-6de9-4921-8d5a-d0231ca24aa4" (UID: "bde46373-6de9-4921-8d5a-d0231ca24aa4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.352318 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde46373-6de9-4921-8d5a-d0231ca24aa4-kube-api-access-x72xb" (OuterVolumeSpecName: "kube-api-access-x72xb") pod "bde46373-6de9-4921-8d5a-d0231ca24aa4" (UID: "bde46373-6de9-4921-8d5a-d0231ca24aa4"). InnerVolumeSpecName "kube-api-access-x72xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.447167 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde46373-6de9-4921-8d5a-d0231ca24aa4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.447234 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x72xb\" (UniqueName: \"kubernetes.io/projected/bde46373-6de9-4921-8d5a-d0231ca24aa4-kube-api-access-x72xb\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.806744 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-2xfhc" event={"ID":"bde46373-6de9-4921-8d5a-d0231ca24aa4","Type":"ContainerDied","Data":"5b338b7004af5eaa519efd8b0b0c0f591a3355b1b3d2e8760f1f3c061cf04eaa"} Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.806768 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-2xfhc" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.807212 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b338b7004af5eaa519efd8b0b0c0f591a3355b1b3d2e8760f1f3c061cf04eaa" Feb 19 09:55:58 crc kubenswrapper[4780]: I0219 09:55:58.939377 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:55:58 crc kubenswrapper[4780]: E0219 09:55:58.939987 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.221570 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.264254 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1987e82-c3a2-49d9-b234-06252c4b17c2-operator-scripts\") pod \"f1987e82-c3a2-49d9-b234-06252c4b17c2\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.264313 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8l8\" (UniqueName: \"kubernetes.io/projected/f1987e82-c3a2-49d9-b234-06252c4b17c2-kube-api-access-zg8l8\") pod \"f1987e82-c3a2-49d9-b234-06252c4b17c2\" (UID: \"f1987e82-c3a2-49d9-b234-06252c4b17c2\") " Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.265005 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1987e82-c3a2-49d9-b234-06252c4b17c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1987e82-c3a2-49d9-b234-06252c4b17c2" (UID: "f1987e82-c3a2-49d9-b234-06252c4b17c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.274016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1987e82-c3a2-49d9-b234-06252c4b17c2-kube-api-access-zg8l8" (OuterVolumeSpecName: "kube-api-access-zg8l8") pod "f1987e82-c3a2-49d9-b234-06252c4b17c2" (UID: "f1987e82-c3a2-49d9-b234-06252c4b17c2"). InnerVolumeSpecName "kube-api-access-zg8l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.367274 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg8l8\" (UniqueName: \"kubernetes.io/projected/f1987e82-c3a2-49d9-b234-06252c4b17c2-kube-api-access-zg8l8\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.367540 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1987e82-c3a2-49d9-b234-06252c4b17c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.817544 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c55c-account-create-update-zz228" event={"ID":"f1987e82-c3a2-49d9-b234-06252c4b17c2","Type":"ContainerDied","Data":"3b255561964679978b119594f7cc517fae5ccf67f25490a04599914b9324b74a"} Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.817819 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b255561964679978b119594f7cc517fae5ccf67f25490a04599914b9324b74a" Feb 19 09:55:59 crc kubenswrapper[4780]: I0219 09:55:59.817615 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c55c-account-create-update-zz228" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.177244 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4lqmn" podUID="417d0039-dd62-4b81-bcb7-5859c1d11b4e" containerName="ovn-controller" probeResult="failure" output=< Feb 19 09:56:01 crc kubenswrapper[4780]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 09:56:01 crc kubenswrapper[4780]: > Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.253186 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.255017 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l695t" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.378524 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4lqmn-config-rd8sh"] Feb 19 09:56:01 crc kubenswrapper[4780]: E0219 09:56:01.379171 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde46373-6de9-4921-8d5a-d0231ca24aa4" containerName="mariadb-database-create" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.379289 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde46373-6de9-4921-8d5a-d0231ca24aa4" containerName="mariadb-database-create" Feb 19 09:56:01 crc kubenswrapper[4780]: E0219 09:56:01.379369 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1987e82-c3a2-49d9-b234-06252c4b17c2" containerName="mariadb-account-create-update" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.379422 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1987e82-c3a2-49d9-b234-06252c4b17c2" containerName="mariadb-account-create-update" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.379642 4780 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bde46373-6de9-4921-8d5a-d0231ca24aa4" containerName="mariadb-database-create" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.379732 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1987e82-c3a2-49d9-b234-06252c4b17c2" containerName="mariadb-account-create-update" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.380589 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.395111 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4lqmn-config-rd8sh"] Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.395389 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.529114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-log-ovn\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.529371 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-scripts\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.529460 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run-ovn\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: 
\"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.529548 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.529616 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2dpk\" (UniqueName: \"kubernetes.io/projected/b674eda1-b35a-4b73-ad6d-b3686daf78e9-kube-api-access-x2dpk\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.529750 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-additional-scripts\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.631802 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-log-ovn\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.631839 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-scripts\") pod 
\"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.631863 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run-ovn\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.631887 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.631904 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2dpk\" (UniqueName: \"kubernetes.io/projected/b674eda1-b35a-4b73-ad6d-b3686daf78e9-kube-api-access-x2dpk\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.631965 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-additional-scripts\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.632116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-log-ovn\") pod 
\"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.632116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run-ovn\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.632150 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.632652 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-additional-scripts\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.634586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-scripts\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.654315 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2dpk\" (UniqueName: \"kubernetes.io/projected/b674eda1-b35a-4b73-ad6d-b3686daf78e9-kube-api-access-x2dpk\") pod \"ovn-controller-4lqmn-config-rd8sh\" (UID: 
\"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:01 crc kubenswrapper[4780]: I0219 09:56:01.756851 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.107474 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-q6njm"] Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.108861 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.120390 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-q6njm"] Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.240550 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4lqmn-config-rd8sh"] Feb 19 09:56:02 crc kubenswrapper[4780]: W0219 09:56:02.243558 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb674eda1_b35a_4b73_ad6d_b3686daf78e9.slice/crio-0bbb83a350d33465db27475269a9ca4b9a566bc0e8af78f31634f34865bae474 WatchSource:0}: Error finding container 0bbb83a350d33465db27475269a9ca4b9a566bc0e8af78f31634f34865bae474: Status 404 returned error can't find the container with id 0bbb83a350d33465db27475269a9ca4b9a566bc0e8af78f31634f34865bae474 Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.248677 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvnsj\" (UniqueName: \"kubernetes.io/projected/100678d8-ee54-41f3-ba9b-b37a79cc7385-kube-api-access-hvnsj\") pod \"octavia-persistence-db-create-q6njm\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: 
I0219 09:56:02.249923 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/100678d8-ee54-41f3-ba9b-b37a79cc7385-operator-scripts\") pod \"octavia-persistence-db-create-q6njm\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.352253 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/100678d8-ee54-41f3-ba9b-b37a79cc7385-operator-scripts\") pod \"octavia-persistence-db-create-q6njm\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.352418 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvnsj\" (UniqueName: \"kubernetes.io/projected/100678d8-ee54-41f3-ba9b-b37a79cc7385-kube-api-access-hvnsj\") pod \"octavia-persistence-db-create-q6njm\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.353320 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/100678d8-ee54-41f3-ba9b-b37a79cc7385-operator-scripts\") pod \"octavia-persistence-db-create-q6njm\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.373495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvnsj\" (UniqueName: \"kubernetes.io/projected/100678d8-ee54-41f3-ba9b-b37a79cc7385-kube-api-access-hvnsj\") pod \"octavia-persistence-db-create-q6njm\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " 
pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.426749 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.632708 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-0c3c-account-create-update-ljnws"] Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.634585 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.639060 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.650133 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-0c3c-account-create-update-ljnws"] Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.779272 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682hm\" (UniqueName: \"kubernetes.io/projected/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-kube-api-access-682hm\") pod \"octavia-0c3c-account-create-update-ljnws\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.779332 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-operator-scripts\") pod \"octavia-0c3c-account-create-update-ljnws\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.848060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-4lqmn-config-rd8sh" event={"ID":"b674eda1-b35a-4b73-ad6d-b3686daf78e9","Type":"ContainerStarted","Data":"ca6b6f0620dc49912cda17c2f631d21794cca1d4557cbae385aae4af38370c8f"} Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.848165 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn-config-rd8sh" event={"ID":"b674eda1-b35a-4b73-ad6d-b3686daf78e9","Type":"ContainerStarted","Data":"0bbb83a350d33465db27475269a9ca4b9a566bc0e8af78f31634f34865bae474"} Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.870878 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4lqmn-config-rd8sh" podStartSLOduration=1.870849512 podStartE2EDuration="1.870849512s" podCreationTimestamp="2026-02-19 09:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:56:02.870531884 +0000 UTC m=+5705.614189343" watchObservedRunningTime="2026-02-19 09:56:02.870849512 +0000 UTC m=+5705.614506961" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.881956 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-682hm\" (UniqueName: \"kubernetes.io/projected/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-kube-api-access-682hm\") pod \"octavia-0c3c-account-create-update-ljnws\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.882044 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-operator-scripts\") pod \"octavia-0c3c-account-create-update-ljnws\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 
09:56:02.883395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-operator-scripts\") pod \"octavia-0c3c-account-create-update-ljnws\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.909035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-682hm\" (UniqueName: \"kubernetes.io/projected/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-kube-api-access-682hm\") pod \"octavia-0c3c-account-create-update-ljnws\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:02 crc kubenswrapper[4780]: I0219 09:56:02.970306 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-q6njm"] Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.005754 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.494259 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-0c3c-account-create-update-ljnws"] Feb 19 09:56:03 crc kubenswrapper[4780]: W0219 09:56:03.530437 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8078e9b0_3cbc_4fc3_8305_aee96f30eadc.slice/crio-e94e275134856049211354eeb0e47b5a4f70d4b45dfed3aced22c8fe53b72557 WatchSource:0}: Error finding container e94e275134856049211354eeb0e47b5a4f70d4b45dfed3aced22c8fe53b72557: Status 404 returned error can't find the container with id e94e275134856049211354eeb0e47b5a4f70d4b45dfed3aced22c8fe53b72557 Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.857007 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-0c3c-account-create-update-ljnws" event={"ID":"8078e9b0-3cbc-4fc3-8305-aee96f30eadc","Type":"ContainerStarted","Data":"2bbc1941c72f26ab918e336bb78d50ba7cfe4fa8bcb0747ea4e1e479ee94b66a"} Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.857054 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-0c3c-account-create-update-ljnws" event={"ID":"8078e9b0-3cbc-4fc3-8305-aee96f30eadc","Type":"ContainerStarted","Data":"e94e275134856049211354eeb0e47b5a4f70d4b45dfed3aced22c8fe53b72557"} Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.859072 4780 generic.go:334] "Generic (PLEG): container finished" podID="b674eda1-b35a-4b73-ad6d-b3686daf78e9" containerID="ca6b6f0620dc49912cda17c2f631d21794cca1d4557cbae385aae4af38370c8f" exitCode=0 Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.859152 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn-config-rd8sh" 
event={"ID":"b674eda1-b35a-4b73-ad6d-b3686daf78e9","Type":"ContainerDied","Data":"ca6b6f0620dc49912cda17c2f631d21794cca1d4557cbae385aae4af38370c8f"} Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.860974 4780 generic.go:334] "Generic (PLEG): container finished" podID="100678d8-ee54-41f3-ba9b-b37a79cc7385" containerID="51e012ff1ad5ff5ccc59e9bbb595ee0ad582349068d8a5f778223610a5208623" exitCode=0 Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.861004 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q6njm" event={"ID":"100678d8-ee54-41f3-ba9b-b37a79cc7385","Type":"ContainerDied","Data":"51e012ff1ad5ff5ccc59e9bbb595ee0ad582349068d8a5f778223610a5208623"} Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.861021 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q6njm" event={"ID":"100678d8-ee54-41f3-ba9b-b37a79cc7385","Type":"ContainerStarted","Data":"a0bf4f45e37184de3368d101a38f62f221377d10a4d6896ccf59d3b0f018fc0a"} Feb 19 09:56:03 crc kubenswrapper[4780]: I0219 09:56:03.917059 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-0c3c-account-create-update-ljnws" podStartSLOduration=1.917040774 podStartE2EDuration="1.917040774s" podCreationTimestamp="2026-02-19 09:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:56:03.908590647 +0000 UTC m=+5706.652248096" watchObservedRunningTime="2026-02-19 09:56:03.917040774 +0000 UTC m=+5706.660698213" Feb 19 09:56:04 crc kubenswrapper[4780]: I0219 09:56:04.873825 4780 generic.go:334] "Generic (PLEG): container finished" podID="8078e9b0-3cbc-4fc3-8305-aee96f30eadc" containerID="2bbc1941c72f26ab918e336bb78d50ba7cfe4fa8bcb0747ea4e1e479ee94b66a" exitCode=0 Feb 19 09:56:04 crc kubenswrapper[4780]: I0219 09:56:04.874304 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/octavia-0c3c-account-create-update-ljnws" event={"ID":"8078e9b0-3cbc-4fc3-8305-aee96f30eadc","Type":"ContainerDied","Data":"2bbc1941c72f26ab918e336bb78d50ba7cfe4fa8bcb0747ea4e1e479ee94b66a"} Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.427663 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.434983 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.539860 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run-ovn\") pod \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.539949 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-log-ovn\") pod \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.540014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2dpk\" (UniqueName: \"kubernetes.io/projected/b674eda1-b35a-4b73-ad6d-b3686daf78e9-kube-api-access-x2dpk\") pod \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.540034 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b674eda1-b35a-4b73-ad6d-b3686daf78e9" (UID: 
"b674eda1-b35a-4b73-ad6d-b3686daf78e9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.540117 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run" (OuterVolumeSpecName: "var-run") pod "b674eda1-b35a-4b73-ad6d-b3686daf78e9" (UID: "b674eda1-b35a-4b73-ad6d-b3686daf78e9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.540285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b674eda1-b35a-4b73-ad6d-b3686daf78e9" (UID: "b674eda1-b35a-4b73-ad6d-b3686daf78e9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.540084 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run\") pod \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.541408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-scripts\") pod \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\" (UID: \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.541503 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-additional-scripts\") pod \"b674eda1-b35a-4b73-ad6d-b3686daf78e9\" (UID: 
\"b674eda1-b35a-4b73-ad6d-b3686daf78e9\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.541696 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvnsj\" (UniqueName: \"kubernetes.io/projected/100678d8-ee54-41f3-ba9b-b37a79cc7385-kube-api-access-hvnsj\") pod \"100678d8-ee54-41f3-ba9b-b37a79cc7385\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.541824 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/100678d8-ee54-41f3-ba9b-b37a79cc7385-operator-scripts\") pod \"100678d8-ee54-41f3-ba9b-b37a79cc7385\" (UID: \"100678d8-ee54-41f3-ba9b-b37a79cc7385\") " Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.542886 4780 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.542923 4780 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.542944 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b674eda1-b35a-4b73-ad6d-b3686daf78e9-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.543654 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100678d8-ee54-41f3-ba9b-b37a79cc7385-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "100678d8-ee54-41f3-ba9b-b37a79cc7385" (UID: "100678d8-ee54-41f3-ba9b-b37a79cc7385"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.543877 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-scripts" (OuterVolumeSpecName: "scripts") pod "b674eda1-b35a-4b73-ad6d-b3686daf78e9" (UID: "b674eda1-b35a-4b73-ad6d-b3686daf78e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.544090 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b674eda1-b35a-4b73-ad6d-b3686daf78e9" (UID: "b674eda1-b35a-4b73-ad6d-b3686daf78e9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.546630 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b674eda1-b35a-4b73-ad6d-b3686daf78e9-kube-api-access-x2dpk" (OuterVolumeSpecName: "kube-api-access-x2dpk") pod "b674eda1-b35a-4b73-ad6d-b3686daf78e9" (UID: "b674eda1-b35a-4b73-ad6d-b3686daf78e9"). InnerVolumeSpecName "kube-api-access-x2dpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.546686 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100678d8-ee54-41f3-ba9b-b37a79cc7385-kube-api-access-hvnsj" (OuterVolumeSpecName: "kube-api-access-hvnsj") pod "100678d8-ee54-41f3-ba9b-b37a79cc7385" (UID: "100678d8-ee54-41f3-ba9b-b37a79cc7385"). InnerVolumeSpecName "kube-api-access-hvnsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.644970 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.645010 4780 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b674eda1-b35a-4b73-ad6d-b3686daf78e9-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.645020 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvnsj\" (UniqueName: \"kubernetes.io/projected/100678d8-ee54-41f3-ba9b-b37a79cc7385-kube-api-access-hvnsj\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.645033 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/100678d8-ee54-41f3-ba9b-b37a79cc7385-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.645044 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2dpk\" (UniqueName: \"kubernetes.io/projected/b674eda1-b35a-4b73-ad6d-b3686daf78e9-kube-api-access-x2dpk\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.886294 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn-config-rd8sh" event={"ID":"b674eda1-b35a-4b73-ad6d-b3686daf78e9","Type":"ContainerDied","Data":"0bbb83a350d33465db27475269a9ca4b9a566bc0e8af78f31634f34865bae474"} Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.886384 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbb83a350d33465db27475269a9ca4b9a566bc0e8af78f31634f34865bae474" Feb 19 09:56:05 crc kubenswrapper[4780]: 
I0219 09:56:05.886330 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-rd8sh" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.888897 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q6njm" event={"ID":"100678d8-ee54-41f3-ba9b-b37a79cc7385","Type":"ContainerDied","Data":"a0bf4f45e37184de3368d101a38f62f221377d10a4d6896ccf59d3b0f018fc0a"} Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.888961 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bf4f45e37184de3368d101a38f62f221377d10a4d6896ccf59d3b0f018fc0a" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.888898 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q6njm" Feb 19 09:56:05 crc kubenswrapper[4780]: I0219 09:56:05.987076 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4lqmn-config-rd8sh"] Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.006346 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4lqmn-config-rd8sh"] Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.108510 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4lqmn-config-27z7s"] Feb 19 09:56:06 crc kubenswrapper[4780]: E0219 09:56:06.109005 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b674eda1-b35a-4b73-ad6d-b3686daf78e9" containerName="ovn-config" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.109024 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b674eda1-b35a-4b73-ad6d-b3686daf78e9" containerName="ovn-config" Feb 19 09:56:06 crc kubenswrapper[4780]: E0219 09:56:06.109055 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100678d8-ee54-41f3-ba9b-b37a79cc7385" containerName="mariadb-database-create" Feb 19 
09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.109062 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="100678d8-ee54-41f3-ba9b-b37a79cc7385" containerName="mariadb-database-create" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.109303 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b674eda1-b35a-4b73-ad6d-b3686daf78e9" containerName="ovn-config" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.109330 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="100678d8-ee54-41f3-ba9b-b37a79cc7385" containerName="mariadb-database-create" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.110065 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.115462 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.116414 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4lqmn-config-27z7s"] Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.154602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run-ovn\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.154663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 
09:56:06.154726 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2l9r\" (UniqueName: \"kubernetes.io/projected/21ccbf89-fe7e-4263-af00-1aee06e6e29b-kube-api-access-x2l9r\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.154757 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-log-ovn\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.154814 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-scripts\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.154840 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-additional-scripts\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.169588 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4lqmn" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.180880 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.209376 4780 scope.go:117] "RemoveContainer" containerID="a62cddb3fd3e65899b6d0f40592f0fdef51aed70bbfd1c6252e59c88a70bda68" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.248201 4780 scope.go:117] "RemoveContainer" containerID="b573016ed4567b6cb698488ebbdbb00c2b4294f9aa3d6ca315dc9066e4b88a24" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.255988 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-682hm\" (UniqueName: \"kubernetes.io/projected/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-kube-api-access-682hm\") pod \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.256149 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-operator-scripts\") pod \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\" (UID: \"8078e9b0-3cbc-4fc3-8305-aee96f30eadc\") " Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.256473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-additional-scripts\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.256605 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run-ovn\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc 
kubenswrapper[4780]: I0219 09:56:06.256698 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.256764 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8078e9b0-3cbc-4fc3-8305-aee96f30eadc" (UID: "8078e9b0-3cbc-4fc3-8305-aee96f30eadc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.256855 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2l9r\" (UniqueName: \"kubernetes.io/projected/21ccbf89-fe7e-4263-af00-1aee06e6e29b-kube-api-access-x2l9r\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.256939 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-log-ovn\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.257049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-scripts\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " 
pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.257164 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.257284 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-additional-scripts\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.257621 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.257635 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-log-ovn\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.259215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run-ovn\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.259918 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-scripts\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.265295 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-kube-api-access-682hm" (OuterVolumeSpecName: "kube-api-access-682hm") pod "8078e9b0-3cbc-4fc3-8305-aee96f30eadc" (UID: "8078e9b0-3cbc-4fc3-8305-aee96f30eadc"). InnerVolumeSpecName "kube-api-access-682hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.273287 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2l9r\" (UniqueName: \"kubernetes.io/projected/21ccbf89-fe7e-4263-af00-1aee06e6e29b-kube-api-access-x2l9r\") pod \"ovn-controller-4lqmn-config-27z7s\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.283819 4780 scope.go:117] "RemoveContainer" containerID="7bbbdc68804af2d2db47fbb2ff8d5ac71409d0bf2b86edd6dbd3b2f564218811" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.358898 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-682hm\" (UniqueName: \"kubernetes.io/projected/8078e9b0-3cbc-4fc3-8305-aee96f30eadc-kube-api-access-682hm\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.359107 4780 scope.go:117] "RemoveContainer" containerID="920f2d5e33fb27396faffd182eacfc25ea97e70eb90883b34936198cc54df173" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.439184 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.903665 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-0c3c-account-create-update-ljnws" event={"ID":"8078e9b0-3cbc-4fc3-8305-aee96f30eadc","Type":"ContainerDied","Data":"e94e275134856049211354eeb0e47b5a4f70d4b45dfed3aced22c8fe53b72557"} Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.903742 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94e275134856049211354eeb0e47b5a4f70d4b45dfed3aced22c8fe53b72557" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.903761 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-0c3c-account-create-update-ljnws" Feb 19 09:56:06 crc kubenswrapper[4780]: I0219 09:56:06.930536 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4lqmn-config-27z7s"] Feb 19 09:56:06 crc kubenswrapper[4780]: W0219 09:56:06.932774 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ccbf89_fe7e_4263_af00_1aee06e6e29b.slice/crio-402558d302b8069d05ee4853aae0ab63d9247a439bbbd602a96b091aa47862b8 WatchSource:0}: Error finding container 402558d302b8069d05ee4853aae0ab63d9247a439bbbd602a96b091aa47862b8: Status 404 returned error can't find the container with id 402558d302b8069d05ee4853aae0ab63d9247a439bbbd602a96b091aa47862b8 Feb 19 09:56:07 crc kubenswrapper[4780]: I0219 09:56:07.932946 4780 generic.go:334] "Generic (PLEG): container finished" podID="21ccbf89-fe7e-4263-af00-1aee06e6e29b" containerID="418bc78d66b7dad6743fa5f4f31a15c61ce668dc6012da205bc64c332f66e444" exitCode=0 Feb 19 09:56:07 crc kubenswrapper[4780]: I0219 09:56:07.933025 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn-config-27z7s" 
event={"ID":"21ccbf89-fe7e-4263-af00-1aee06e6e29b","Type":"ContainerDied","Data":"418bc78d66b7dad6743fa5f4f31a15c61ce668dc6012da205bc64c332f66e444"} Feb 19 09:56:07 crc kubenswrapper[4780]: I0219 09:56:07.933431 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn-config-27z7s" event={"ID":"21ccbf89-fe7e-4263-af00-1aee06e6e29b","Type":"ContainerStarted","Data":"402558d302b8069d05ee4853aae0ab63d9247a439bbbd602a96b091aa47862b8"} Feb 19 09:56:07 crc kubenswrapper[4780]: I0219 09:56:07.964581 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b674eda1-b35a-4b73-ad6d-b3686daf78e9" path="/var/lib/kubelet/pods/b674eda1-b35a-4b73-ad6d-b3686daf78e9/volumes" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.425219 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-cc57564bf-2fkwj"] Feb 19 09:56:08 crc kubenswrapper[4780]: E0219 09:56:08.425577 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8078e9b0-3cbc-4fc3-8305-aee96f30eadc" containerName="mariadb-account-create-update" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.427115 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8078e9b0-3cbc-4fc3-8305-aee96f30eadc" containerName="mariadb-account-create-update" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.427350 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8078e9b0-3cbc-4fc3-8305-aee96f30eadc" containerName="mariadb-account-create-update" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.428586 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.431633 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-ncdtm" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.432464 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.432483 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.443439 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-cc57564bf-2fkwj"] Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.507799 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-config-data-merged\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.508063 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-combined-ca-bundle\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.508087 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-config-data\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc 
kubenswrapper[4780]: I0219 09:56:08.508158 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-scripts\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.508209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-octavia-run\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.609986 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-scripts\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.610074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-octavia-run\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.610151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-config-data-merged\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.610198 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-combined-ca-bundle\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.610222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-config-data\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.611031 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-octavia-run\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.611173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-config-data-merged\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.617760 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-config-data\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.617793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-scripts\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.622525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ea7c27-2325-4ff4-95f3-beb9d2aff6d1-combined-ca-bundle\") pod \"octavia-api-cc57564bf-2fkwj\" (UID: \"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1\") " pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:08 crc kubenswrapper[4780]: I0219 09:56:08.755149 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.307005 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-cc57564bf-2fkwj"] Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.324381 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.328807 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.442201 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run\") pod \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.442644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-additional-scripts\") pod \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.442345 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run" (OuterVolumeSpecName: "var-run") pod "21ccbf89-fe7e-4263-af00-1aee06e6e29b" (UID: "21ccbf89-fe7e-4263-af00-1aee06e6e29b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.442813 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-scripts\") pod \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.443081 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-log-ovn\") pod \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.443171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "21ccbf89-fe7e-4263-af00-1aee06e6e29b" (UID: "21ccbf89-fe7e-4263-af00-1aee06e6e29b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.443433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2l9r\" (UniqueName: \"kubernetes.io/projected/21ccbf89-fe7e-4263-af00-1aee06e6e29b-kube-api-access-x2l9r\") pod \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.443527 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run-ovn\") pod \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\" (UID: \"21ccbf89-fe7e-4263-af00-1aee06e6e29b\") " Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.443575 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "21ccbf89-fe7e-4263-af00-1aee06e6e29b" (UID: "21ccbf89-fe7e-4263-af00-1aee06e6e29b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.444453 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.444542 4780 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.444609 4780 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21ccbf89-fe7e-4263-af00-1aee06e6e29b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.446279 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "21ccbf89-fe7e-4263-af00-1aee06e6e29b" (UID: "21ccbf89-fe7e-4263-af00-1aee06e6e29b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.447915 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-scripts" (OuterVolumeSpecName: "scripts") pod "21ccbf89-fe7e-4263-af00-1aee06e6e29b" (UID: "21ccbf89-fe7e-4263-af00-1aee06e6e29b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.451117 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ccbf89-fe7e-4263-af00-1aee06e6e29b-kube-api-access-x2l9r" (OuterVolumeSpecName: "kube-api-access-x2l9r") pod "21ccbf89-fe7e-4263-af00-1aee06e6e29b" (UID: "21ccbf89-fe7e-4263-af00-1aee06e6e29b"). InnerVolumeSpecName "kube-api-access-x2l9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.549600 4780 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.549871 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21ccbf89-fe7e-4263-af00-1aee06e6e29b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.549929 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2l9r\" (UniqueName: \"kubernetes.io/projected/21ccbf89-fe7e-4263-af00-1aee06e6e29b-kube-api-access-x2l9r\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.979651 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-cc57564bf-2fkwj" event={"ID":"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1","Type":"ContainerStarted","Data":"ab908c257b2192ef4b838edd125730b43a6eaa4260db5954a0fbbbff964b58b8"} Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.981233 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4lqmn-config-27z7s" event={"ID":"21ccbf89-fe7e-4263-af00-1aee06e6e29b","Type":"ContainerDied","Data":"402558d302b8069d05ee4853aae0ab63d9247a439bbbd602a96b091aa47862b8"} Feb 19 09:56:09 crc 
kubenswrapper[4780]: I0219 09:56:09.981258 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402558d302b8069d05ee4853aae0ab63d9247a439bbbd602a96b091aa47862b8" Feb 19 09:56:09 crc kubenswrapper[4780]: I0219 09:56:09.981317 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4lqmn-config-27z7s" Feb 19 09:56:10 crc kubenswrapper[4780]: I0219 09:56:10.404562 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4lqmn-config-27z7s"] Feb 19 09:56:10 crc kubenswrapper[4780]: I0219 09:56:10.415374 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4lqmn-config-27z7s"] Feb 19 09:56:11 crc kubenswrapper[4780]: I0219 09:56:11.952405 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ccbf89-fe7e-4263-af00-1aee06e6e29b" path="/var/lib/kubelet/pods/21ccbf89-fe7e-4263-af00-1aee06e6e29b/volumes" Feb 19 09:56:13 crc kubenswrapper[4780]: I0219 09:56:13.937928 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:56:13 crc kubenswrapper[4780]: E0219 09:56:13.938360 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:56:18 crc kubenswrapper[4780]: I0219 09:56:18.073942 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-cc57564bf-2fkwj" event={"ID":"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1","Type":"ContainerStarted","Data":"585e02035779a53efa13fd9d05da4580bd1fdc53dbc5689d9c9fa16ad243544d"} Feb 19 09:56:19 crc 
kubenswrapper[4780]: I0219 09:56:19.088522 4780 generic.go:334] "Generic (PLEG): container finished" podID="81ea7c27-2325-4ff4-95f3-beb9d2aff6d1" containerID="585e02035779a53efa13fd9d05da4580bd1fdc53dbc5689d9c9fa16ad243544d" exitCode=0 Feb 19 09:56:19 crc kubenswrapper[4780]: I0219 09:56:19.088639 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-cc57564bf-2fkwj" event={"ID":"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1","Type":"ContainerDied","Data":"585e02035779a53efa13fd9d05da4580bd1fdc53dbc5689d9c9fa16ad243544d"} Feb 19 09:56:20 crc kubenswrapper[4780]: I0219 09:56:20.106228 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-cc57564bf-2fkwj" event={"ID":"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1","Type":"ContainerStarted","Data":"b491a1538d10827c33762d1639684347ae02edac5d2048bf5e03170e09431813"} Feb 19 09:56:20 crc kubenswrapper[4780]: I0219 09:56:20.106651 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-cc57564bf-2fkwj" event={"ID":"81ea7c27-2325-4ff4-95f3-beb9d2aff6d1","Type":"ContainerStarted","Data":"15d5ab5da1e5b836e0a6f221390030713be1f8e3c01ff949c47cf7e2d2cbb112"} Feb 19 09:56:20 crc kubenswrapper[4780]: I0219 09:56:20.106700 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:20 crc kubenswrapper[4780]: I0219 09:56:20.106731 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:20 crc kubenswrapper[4780]: I0219 09:56:20.133621 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-cc57564bf-2fkwj" podStartSLOduration=3.766088718 podStartE2EDuration="12.133605577s" podCreationTimestamp="2026-02-19 09:56:08 +0000 UTC" firstStartedPulling="2026-02-19 09:56:09.324159279 +0000 UTC m=+5712.067816738" lastFinishedPulling="2026-02-19 09:56:17.691676128 +0000 UTC m=+5720.435333597" 
observedRunningTime="2026-02-19 09:56:20.129183528 +0000 UTC m=+5722.872840987" watchObservedRunningTime="2026-02-19 09:56:20.133605577 +0000 UTC m=+5722.877263026" Feb 19 09:56:26 crc kubenswrapper[4780]: I0219 09:56:26.938891 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:56:26 crc kubenswrapper[4780]: E0219 09:56:26.939866 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.210486 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-nx6fv"] Feb 19 09:56:28 crc kubenswrapper[4780]: E0219 09:56:28.212500 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ccbf89-fe7e-4263-af00-1aee06e6e29b" containerName="ovn-config" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.212568 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ccbf89-fe7e-4263-af00-1aee06e6e29b" containerName="ovn-config" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.213196 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ccbf89-fe7e-4263-af00-1aee06e6e29b" containerName="ovn-config" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.215069 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.224251 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.224278 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.224409 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.248689 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-nx6fv"] Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.315423 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a0a6c5f6-8431-4468-9639-5f83a903d0ab-config-data-merged\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.315667 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a6c5f6-8431-4468-9639-5f83a903d0ab-config-data\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.315773 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a0a6c5f6-8431-4468-9639-5f83a903d0ab-hm-ports\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.315916 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a6c5f6-8431-4468-9639-5f83a903d0ab-scripts\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.417685 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a0a6c5f6-8431-4468-9639-5f83a903d0ab-hm-ports\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.417749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a6c5f6-8431-4468-9639-5f83a903d0ab-scripts\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.417807 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a0a6c5f6-8431-4468-9639-5f83a903d0ab-config-data-merged\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.417963 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a6c5f6-8431-4468-9639-5f83a903d0ab-config-data\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.418859 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/a0a6c5f6-8431-4468-9639-5f83a903d0ab-config-data-merged\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.419403 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a0a6c5f6-8431-4468-9639-5f83a903d0ab-hm-ports\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.423556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a6c5f6-8431-4468-9639-5f83a903d0ab-config-data\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.425063 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a6c5f6-8431-4468-9639-5f83a903d0ab-scripts\") pod \"octavia-rsyslog-nx6fv\" (UID: \"a0a6c5f6-8431-4468-9639-5f83a903d0ab\") " pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.548900 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.968760 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-kg45w"] Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.973540 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:28 crc kubenswrapper[4780]: I0219 09:56:28.982375 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.018888 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-kg45w"] Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.041483 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-amphora-image\") pod \"octavia-image-upload-8d4564f8f-kg45w\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.041627 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-httpd-config\") pod \"octavia-image-upload-8d4564f8f-kg45w\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.128399 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-nx6fv"] Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.142833 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-amphora-image\") pod \"octavia-image-upload-8d4564f8f-kg45w\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.142898 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-httpd-config\") pod \"octavia-image-upload-8d4564f8f-kg45w\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.143937 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-amphora-image\") pod \"octavia-image-upload-8d4564f8f-kg45w\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.150309 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-httpd-config\") pod \"octavia-image-upload-8d4564f8f-kg45w\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.201916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-nx6fv" event={"ID":"a0a6c5f6-8431-4468-9639-5f83a903d0ab","Type":"ContainerStarted","Data":"57f1523f30837a895cbfa944f9ebaa907c8892077ccc32493e9d0891fd40ef68"} Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.291580 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-nx6fv"] Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.315768 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:56:29 crc kubenswrapper[4780]: I0219 09:56:29.793868 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-kg45w"] Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.220573 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-ml9nw"] Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.222739 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.233697 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.237283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" event={"ID":"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db","Type":"ContainerStarted","Data":"6c3d655bdc06e2e2bf8b08ebb7963faca8adab29de05d20cf27ba3bb97a23ecf"} Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.240665 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-ml9nw"] Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.368429 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-config-data\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.368540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-combined-ca-bundle\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " 
pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.368730 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/237df558-e233-4dd2-a360-44cdbe273c41-config-data-merged\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.368802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-scripts\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.470903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-config-data\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.470974 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-combined-ca-bundle\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.471041 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/237df558-e233-4dd2-a360-44cdbe273c41-config-data-merged\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 
09:56:30.471072 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-scripts\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.475642 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/237df558-e233-4dd2-a360-44cdbe273c41-config-data-merged\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.477861 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-config-data\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.477872 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-scripts\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.478915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-combined-ca-bundle\") pod \"octavia-db-sync-ml9nw\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:30 crc kubenswrapper[4780]: I0219 09:56:30.584337 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:31 crc kubenswrapper[4780]: I0219 09:56:31.084048 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-ml9nw"] Feb 19 09:56:31 crc kubenswrapper[4780]: I0219 09:56:31.255674 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-ml9nw" event={"ID":"237df558-e233-4dd2-a360-44cdbe273c41","Type":"ContainerStarted","Data":"eba91640e2f4376dcf15e4da141ce832bbc0fc5d874bce2f18900e693b465424"} Feb 19 09:56:31 crc kubenswrapper[4780]: I0219 09:56:31.258785 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-nx6fv" event={"ID":"a0a6c5f6-8431-4468-9639-5f83a903d0ab","Type":"ContainerStarted","Data":"ae9982e4f145327837abb8f3b7d5767dc207290b44e2214380d160a668c1bec7"} Feb 19 09:56:32 crc kubenswrapper[4780]: I0219 09:56:32.269583 4780 generic.go:334] "Generic (PLEG): container finished" podID="237df558-e233-4dd2-a360-44cdbe273c41" containerID="c425696c251937575105af05c55677a6e77f47be0c85e364329f983fab6425af" exitCode=0 Feb 19 09:56:32 crc kubenswrapper[4780]: I0219 09:56:32.269918 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-ml9nw" event={"ID":"237df558-e233-4dd2-a360-44cdbe273c41","Type":"ContainerDied","Data":"c425696c251937575105af05c55677a6e77f47be0c85e364329f983fab6425af"} Feb 19 09:56:33 crc kubenswrapper[4780]: I0219 09:56:33.281856 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-ml9nw" event={"ID":"237df558-e233-4dd2-a360-44cdbe273c41","Type":"ContainerStarted","Data":"51fe2ec23cfc01b285e72b4a844f01f393d6a22f846136d0c4507feea2508d0e"} Feb 19 09:56:33 crc kubenswrapper[4780]: I0219 09:56:33.285115 4780 generic.go:334] "Generic (PLEG): container finished" podID="a0a6c5f6-8431-4468-9639-5f83a903d0ab" containerID="ae9982e4f145327837abb8f3b7d5767dc207290b44e2214380d160a668c1bec7" exitCode=0 Feb 19 09:56:33 crc 
kubenswrapper[4780]: I0219 09:56:33.285175 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-nx6fv" event={"ID":"a0a6c5f6-8431-4468-9639-5f83a903d0ab","Type":"ContainerDied","Data":"ae9982e4f145327837abb8f3b7d5767dc207290b44e2214380d160a668c1bec7"} Feb 19 09:56:33 crc kubenswrapper[4780]: I0219 09:56:33.313778 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-ml9nw" podStartSLOduration=3.313758724 podStartE2EDuration="3.313758724s" podCreationTimestamp="2026-02-19 09:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:56:33.298289904 +0000 UTC m=+5736.041947363" watchObservedRunningTime="2026-02-19 09:56:33.313758724 +0000 UTC m=+5736.057416173" Feb 19 09:56:37 crc kubenswrapper[4780]: I0219 09:56:37.335954 4780 generic.go:334] "Generic (PLEG): container finished" podID="237df558-e233-4dd2-a360-44cdbe273c41" containerID="51fe2ec23cfc01b285e72b4a844f01f393d6a22f846136d0c4507feea2508d0e" exitCode=0 Feb 19 09:56:37 crc kubenswrapper[4780]: I0219 09:56:37.336279 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-ml9nw" event={"ID":"237df558-e233-4dd2-a360-44cdbe273c41","Type":"ContainerDied","Data":"51fe2ec23cfc01b285e72b4a844f01f393d6a22f846136d0c4507feea2508d0e"} Feb 19 09:56:41 crc kubenswrapper[4780]: I0219 09:56:41.940885 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:56:41 crc kubenswrapper[4780]: E0219 09:56:41.941724 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.334645 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.392438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-ml9nw" event={"ID":"237df558-e233-4dd2-a360-44cdbe273c41","Type":"ContainerDied","Data":"eba91640e2f4376dcf15e4da141ce832bbc0fc5d874bce2f18900e693b465424"} Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.392482 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba91640e2f4376dcf15e4da141ce832bbc0fc5d874bce2f18900e693b465424" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.392547 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-ml9nw" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.528407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/237df558-e233-4dd2-a360-44cdbe273c41-config-data-merged\") pod \"237df558-e233-4dd2-a360-44cdbe273c41\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.528728 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-config-data\") pod \"237df558-e233-4dd2-a360-44cdbe273c41\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.528968 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-combined-ca-bundle\") pod 
\"237df558-e233-4dd2-a360-44cdbe273c41\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.529085 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-scripts\") pod \"237df558-e233-4dd2-a360-44cdbe273c41\" (UID: \"237df558-e233-4dd2-a360-44cdbe273c41\") " Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.539412 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-scripts" (OuterVolumeSpecName: "scripts") pod "237df558-e233-4dd2-a360-44cdbe273c41" (UID: "237df558-e233-4dd2-a360-44cdbe273c41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.546370 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-config-data" (OuterVolumeSpecName: "config-data") pod "237df558-e233-4dd2-a360-44cdbe273c41" (UID: "237df558-e233-4dd2-a360-44cdbe273c41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.566602 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "237df558-e233-4dd2-a360-44cdbe273c41" (UID: "237df558-e233-4dd2-a360-44cdbe273c41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.571863 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237df558-e233-4dd2-a360-44cdbe273c41-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "237df558-e233-4dd2-a360-44cdbe273c41" (UID: "237df558-e233-4dd2-a360-44cdbe273c41"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.640366 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.640827 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/237df558-e233-4dd2-a360-44cdbe273c41-config-data-merged\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.640858 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:42 crc kubenswrapper[4780]: I0219 09:56:42.640873 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237df558-e233-4dd2-a360-44cdbe273c41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:43 crc kubenswrapper[4780]: I0219 09:56:43.408077 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-nx6fv" event={"ID":"a0a6c5f6-8431-4468-9639-5f83a903d0ab","Type":"ContainerStarted","Data":"aaa3823409be48f27d516acb5795a525fdac47778be2ca75430b459accbd729e"} Feb 19 09:56:43 crc kubenswrapper[4780]: I0219 09:56:43.408642 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:56:43 crc kubenswrapper[4780]: I0219 09:56:43.409891 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" event={"ID":"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db","Type":"ContainerStarted","Data":"c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4"} Feb 19 09:56:43 crc kubenswrapper[4780]: I0219 09:56:43.444048 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-nx6fv" podStartSLOduration=1.916014186 podStartE2EDuration="15.444025004s" podCreationTimestamp="2026-02-19 09:56:28 +0000 UTC" firstStartedPulling="2026-02-19 09:56:29.141778279 +0000 UTC m=+5731.885435728" lastFinishedPulling="2026-02-19 09:56:42.669789087 +0000 UTC m=+5745.413446546" observedRunningTime="2026-02-19 09:56:43.429032245 +0000 UTC m=+5746.172689704" watchObservedRunningTime="2026-02-19 09:56:43.444025004 +0000 UTC m=+5746.187682463" Feb 19 09:56:44 crc kubenswrapper[4780]: I0219 09:56:44.429115 4780 generic.go:334] "Generic (PLEG): container finished" podID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerID="c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4" exitCode=0 Feb 19 09:56:44 crc kubenswrapper[4780]: I0219 09:56:44.430513 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" event={"ID":"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db","Type":"ContainerDied","Data":"c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4"} Feb 19 09:56:45 crc kubenswrapper[4780]: I0219 09:56:45.192396 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:45 crc kubenswrapper[4780]: I0219 09:56:45.214525 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-cc57564bf-2fkwj" Feb 19 09:56:45 crc kubenswrapper[4780]: I0219 09:56:45.440177 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" event={"ID":"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db","Type":"ContainerStarted","Data":"edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2"} Feb 19 09:56:45 crc kubenswrapper[4780]: I0219 09:56:45.458738 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" podStartSLOduration=4.470772977 podStartE2EDuration="17.458719898s" podCreationTimestamp="2026-02-19 09:56:28 +0000 UTC" firstStartedPulling="2026-02-19 09:56:29.804436621 +0000 UTC m=+5732.548094070" lastFinishedPulling="2026-02-19 09:56:42.792383542 +0000 UTC m=+5745.536040991" observedRunningTime="2026-02-19 09:56:45.453251443 +0000 UTC m=+5748.196908892" watchObservedRunningTime="2026-02-19 09:56:45.458719898 +0000 UTC m=+5748.202377347" Feb 19 09:56:55 crc kubenswrapper[4780]: I0219 09:56:55.944259 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:56:55 crc kubenswrapper[4780]: E0219 09:56:55.945437 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:56:58 crc kubenswrapper[4780]: I0219 09:56:58.592578 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-nx6fv" Feb 19 09:57:06 crc kubenswrapper[4780]: I0219 09:57:06.939114 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:57:06 crc kubenswrapper[4780]: E0219 09:57:06.940671 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:57:10 crc kubenswrapper[4780]: I0219 09:57:10.862608 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-kg45w"] Feb 19 09:57:10 crc kubenswrapper[4780]: I0219 09:57:10.863433 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerName="octavia-amphora-httpd" containerID="cri-o://edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2" gracePeriod=30 Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.509164 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.588435 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-amphora-image\") pod \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.588579 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-httpd-config\") pod \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\" (UID: \"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db\") " Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.627413 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" (UID: "8eea71ca-6b33-4c9b-8960-c05a0f9ba2db"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.682681 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" (UID: "8eea71ca-6b33-4c9b-8960-c05a0f9ba2db"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.691362 4780 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-amphora-image\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.691396 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.743764 4780 generic.go:334] "Generic (PLEG): container finished" podID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerID="edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2" exitCode=0 Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.743852 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.743850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" event={"ID":"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db","Type":"ContainerDied","Data":"edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2"} Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.744239 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-kg45w" event={"ID":"8eea71ca-6b33-4c9b-8960-c05a0f9ba2db","Type":"ContainerDied","Data":"6c3d655bdc06e2e2bf8b08ebb7963faca8adab29de05d20cf27ba3bb97a23ecf"} Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.744272 4780 scope.go:117] "RemoveContainer" containerID="edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.772691 4780 scope.go:117] "RemoveContainer" 
containerID="c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.777590 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-kg45w"] Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.787424 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-kg45w"] Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.792939 4780 scope.go:117] "RemoveContainer" containerID="edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2" Feb 19 09:57:11 crc kubenswrapper[4780]: E0219 09:57:11.793521 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2\": container with ID starting with edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2 not found: ID does not exist" containerID="edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.793572 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2"} err="failed to get container status \"edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2\": rpc error: code = NotFound desc = could not find container \"edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2\": container with ID starting with edfa9d9a2224e2480db4b7b15859782f8fecac5535b9cbaec545377ae0af8db2 not found: ID does not exist" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.793601 4780 scope.go:117] "RemoveContainer" containerID="c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4" Feb 19 09:57:11 crc kubenswrapper[4780]: E0219 09:57:11.793925 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4\": container with ID starting with c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4 not found: ID does not exist" containerID="c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.793959 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4"} err="failed to get container status \"c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4\": rpc error: code = NotFound desc = could not find container \"c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4\": container with ID starting with c386fa3309fc722e8ae5b7e66fe54602caf317af7eb36f620a3ac6ece3a8a0d4 not found: ID does not exist" Feb 19 09:57:11 crc kubenswrapper[4780]: I0219 09:57:11.953887 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" path="/var/lib/kubelet/pods/8eea71ca-6b33-4c9b-8960-c05a0f9ba2db/volumes" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433055 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-gpkh2"] Feb 19 09:57:15 crc kubenswrapper[4780]: E0219 09:57:15.433659 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237df558-e233-4dd2-a360-44cdbe273c41" containerName="octavia-db-sync" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433672 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="237df558-e233-4dd2-a360-44cdbe273c41" containerName="octavia-db-sync" Feb 19 09:57:15 crc kubenswrapper[4780]: E0219 09:57:15.433689 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237df558-e233-4dd2-a360-44cdbe273c41" containerName="init" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433695 4780 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="237df558-e233-4dd2-a360-44cdbe273c41" containerName="init" Feb 19 09:57:15 crc kubenswrapper[4780]: E0219 09:57:15.433711 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerName="init" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433716 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerName="init" Feb 19 09:57:15 crc kubenswrapper[4780]: E0219 09:57:15.433724 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerName="octavia-amphora-httpd" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433730 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerName="octavia-amphora-httpd" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433914 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="237df558-e233-4dd2-a360-44cdbe273c41" containerName="octavia-db-sync" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.433931 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eea71ca-6b33-4c9b-8960-c05a0f9ba2db" containerName="octavia-amphora-httpd" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.434854 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.437915 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.458084 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-gpkh2"] Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.570310 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ab7a7d5-41e0-4452-9088-63530b72c172-httpd-config\") pod \"octavia-image-upload-8d4564f8f-gpkh2\" (UID: \"2ab7a7d5-41e0-4452-9088-63530b72c172\") " pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.570574 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ab7a7d5-41e0-4452-9088-63530b72c172-amphora-image\") pod \"octavia-image-upload-8d4564f8f-gpkh2\" (UID: \"2ab7a7d5-41e0-4452-9088-63530b72c172\") " pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.672798 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ab7a7d5-41e0-4452-9088-63530b72c172-httpd-config\") pod \"octavia-image-upload-8d4564f8f-gpkh2\" (UID: \"2ab7a7d5-41e0-4452-9088-63530b72c172\") " pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.673008 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ab7a7d5-41e0-4452-9088-63530b72c172-amphora-image\") pod \"octavia-image-upload-8d4564f8f-gpkh2\" (UID: \"2ab7a7d5-41e0-4452-9088-63530b72c172\") 
" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.673474 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ab7a7d5-41e0-4452-9088-63530b72c172-amphora-image\") pod \"octavia-image-upload-8d4564f8f-gpkh2\" (UID: \"2ab7a7d5-41e0-4452-9088-63530b72c172\") " pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.680000 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ab7a7d5-41e0-4452-9088-63530b72c172-httpd-config\") pod \"octavia-image-upload-8d4564f8f-gpkh2\" (UID: \"2ab7a7d5-41e0-4452-9088-63530b72c172\") " pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:15 crc kubenswrapper[4780]: I0219 09:57:15.753102 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" Feb 19 09:57:16 crc kubenswrapper[4780]: I0219 09:57:16.229189 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-gpkh2"] Feb 19 09:57:16 crc kubenswrapper[4780]: I0219 09:57:16.810113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" event={"ID":"2ab7a7d5-41e0-4452-9088-63530b72c172","Type":"ContainerStarted","Data":"0ca035ebfd7f6dd932b17d81f303db8c57f014fa5ec1b5c657a7d29fd2ccaf1f"} Feb 19 09:57:17 crc kubenswrapper[4780]: I0219 09:57:17.820517 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" event={"ID":"2ab7a7d5-41e0-4452-9088-63530b72c172","Type":"ContainerStarted","Data":"2c9724f2f433a65afdec97ee22773d4e20fefc37760b636d312f8dfef005e941"} Feb 19 09:57:20 crc kubenswrapper[4780]: I0219 09:57:20.858947 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="2ab7a7d5-41e0-4452-9088-63530b72c172" containerID="2c9724f2f433a65afdec97ee22773d4e20fefc37760b636d312f8dfef005e941" exitCode=0 Feb 19 09:57:20 crc kubenswrapper[4780]: I0219 09:57:20.859028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" event={"ID":"2ab7a7d5-41e0-4452-9088-63530b72c172","Type":"ContainerDied","Data":"2c9724f2f433a65afdec97ee22773d4e20fefc37760b636d312f8dfef005e941"} Feb 19 09:57:21 crc kubenswrapper[4780]: I0219 09:57:21.877538 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" event={"ID":"2ab7a7d5-41e0-4452-9088-63530b72c172","Type":"ContainerStarted","Data":"6a6c3e06a9138a9c4a754e4a56d1acb40bcfd6e5e9aa6a1225d20e8b4f6699af"} Feb 19 09:57:21 crc kubenswrapper[4780]: I0219 09:57:21.916390 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-8d4564f8f-gpkh2" podStartSLOduration=6.482726009 podStartE2EDuration="6.916370201s" podCreationTimestamp="2026-02-19 09:57:15 +0000 UTC" firstStartedPulling="2026-02-19 09:57:16.25632583 +0000 UTC m=+5778.999983269" lastFinishedPulling="2026-02-19 09:57:16.689970002 +0000 UTC m=+5779.433627461" observedRunningTime="2026-02-19 09:57:21.906851367 +0000 UTC m=+5784.650508856" watchObservedRunningTime="2026-02-19 09:57:21.916370201 +0000 UTC m=+5784.660027640" Feb 19 09:57:21 crc kubenswrapper[4780]: I0219 09:57:21.939202 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:57:21 crc kubenswrapper[4780]: E0219 09:57:21.939545 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.065932 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-q4924"] Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.069814 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.079517 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-q4924"] Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.106629 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.106975 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.106963 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.185945 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-scripts\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.186023 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-combined-ca-bundle\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.186052 
4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c7858366-1626-44dc-885d-78118fe2c43d-hm-ports\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.186092 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c7858366-1626-44dc-885d-78118fe2c43d-config-data-merged\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.186110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-config-data\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.186222 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-amphora-certs\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.288440 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-amphora-certs\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.288988 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-scripts\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.289203 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-combined-ca-bundle\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.289338 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c7858366-1626-44dc-885d-78118fe2c43d-hm-ports\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.289475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c7858366-1626-44dc-885d-78118fe2c43d-config-data-merged\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.289586 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-config-data\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.290407 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c7858366-1626-44dc-885d-78118fe2c43d-config-data-merged\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.291761 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c7858366-1626-44dc-885d-78118fe2c43d-hm-ports\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.296528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-combined-ca-bundle\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.299591 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-config-data\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.302971 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-amphora-certs\") pod \"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.310002 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7858366-1626-44dc-885d-78118fe2c43d-scripts\") pod 
\"octavia-healthmanager-q4924\" (UID: \"c7858366-1626-44dc-885d-78118fe2c43d\") " pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:33 crc kubenswrapper[4780]: I0219 09:57:33.436880 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.057881 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-q4924"] Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.667622 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-9x46n"] Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.669939 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.672607 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.678564 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.681669 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9x46n"] Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.819961 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-config-data-merged\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.820039 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-hm-ports\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.820314 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-scripts\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.820522 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-combined-ca-bundle\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.820699 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-amphora-certs\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.820814 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-config-data\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.923621 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-config-data-merged\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.924280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-config-data-merged\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.924328 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-hm-ports\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.924412 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-scripts\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.924501 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-combined-ca-bundle\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.924553 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-amphora-certs\") pod 
\"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.924602 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-config-data\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.926216 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-hm-ports\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.934517 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-combined-ca-bundle\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.934852 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-config-data\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.935557 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-amphora-certs\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 
09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.940986 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6-scripts\") pod \"octavia-housekeeping-9x46n\" (UID: \"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6\") " pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:34 crc kubenswrapper[4780]: I0219 09:57:34.992353 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.024623 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-q4924" event={"ID":"c7858366-1626-44dc-885d-78118fe2c43d","Type":"ContainerStarted","Data":"d5a169f0c06bb2fa3ac7efc529566b6112998e097b94e592939dbde5ede4e2e9"} Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.024671 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-q4924" event={"ID":"c7858366-1626-44dc-885d-78118fe2c43d","Type":"ContainerStarted","Data":"cde4cc92b0f87dcc9111894049d3e88cfb223b0f116a5a6309ce4fd06125891b"} Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.599704 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9x46n"] Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.935600 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-8fcqd"] Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.938340 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.940653 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.941029 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.955360 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:57:35 crc kubenswrapper[4780]: E0219 09:57:35.955746 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:57:35 crc kubenswrapper[4780]: I0219 09:57:35.963222 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-8fcqd"] Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.039737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9x46n" event={"ID":"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6","Type":"ContainerStarted","Data":"1460d48fa59d4ce7efc1e77773d79c018b0fb90f046f311e899d4751e71ad8e6"} Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.049564 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/ca4ad79a-75c5-46de-8d49-56d2f5bab086-hm-ports\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.049841 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca4ad79a-75c5-46de-8d49-56d2f5bab086-config-data-merged\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.050016 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-combined-ca-bundle\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.050065 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-config-data\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.050100 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-scripts\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.051232 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-amphora-certs\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.153906 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-amphora-certs\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.154064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/ca4ad79a-75c5-46de-8d49-56d2f5bab086-hm-ports\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.154152 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca4ad79a-75c5-46de-8d49-56d2f5bab086-config-data-merged\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.154199 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-combined-ca-bundle\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.154222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-config-data\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.154262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-scripts\") pod \"octavia-worker-8fcqd\" 
(UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.155728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/ca4ad79a-75c5-46de-8d49-56d2f5bab086-hm-ports\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.156008 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca4ad79a-75c5-46de-8d49-56d2f5bab086-config-data-merged\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.161884 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-combined-ca-bundle\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.163164 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-amphora-certs\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.163242 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-config-data\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.165354 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4ad79a-75c5-46de-8d49-56d2f5bab086-scripts\") pod \"octavia-worker-8fcqd\" (UID: \"ca4ad79a-75c5-46de-8d49-56d2f5bab086\") " pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.275685 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:36 crc kubenswrapper[4780]: I0219 09:57:36.863727 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-8fcqd"] Feb 19 09:57:37 crc kubenswrapper[4780]: I0219 09:57:37.052115 4780 generic.go:334] "Generic (PLEG): container finished" podID="c7858366-1626-44dc-885d-78118fe2c43d" containerID="d5a169f0c06bb2fa3ac7efc529566b6112998e097b94e592939dbde5ede4e2e9" exitCode=0 Feb 19 09:57:37 crc kubenswrapper[4780]: I0219 09:57:37.052200 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-q4924" event={"ID":"c7858366-1626-44dc-885d-78118fe2c43d","Type":"ContainerDied","Data":"d5a169f0c06bb2fa3ac7efc529566b6112998e097b94e592939dbde5ede4e2e9"} Feb 19 09:57:37 crc kubenswrapper[4780]: I0219 09:57:37.605688 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-q4924"] Feb 19 09:57:38 crc kubenswrapper[4780]: I0219 09:57:38.062409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9x46n" event={"ID":"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6","Type":"ContainerStarted","Data":"cba5482cea732c9cbfd83d51f40bb5c13895b5428b90a36a562744d2f749bfeb"} Feb 19 09:57:38 crc kubenswrapper[4780]: I0219 09:57:38.067835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-q4924" event={"ID":"c7858366-1626-44dc-885d-78118fe2c43d","Type":"ContainerStarted","Data":"6e3a1cd0b194eb18e438e259fb5bb67d03ee6aa06aafae9941ed8fded642f0ea"} Feb 19 09:57:38 crc kubenswrapper[4780]: I0219 
09:57:38.068793 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:38 crc kubenswrapper[4780]: I0219 09:57:38.070428 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8fcqd" event={"ID":"ca4ad79a-75c5-46de-8d49-56d2f5bab086","Type":"ContainerStarted","Data":"de067ba2d7b9532b58027739125a15d1263758daaa70abb5157a3ef54d50dfdd"} Feb 19 09:57:38 crc kubenswrapper[4780]: I0219 09:57:38.113269 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-q4924" podStartSLOduration=5.1132482 podStartE2EDuration="5.1132482s" podCreationTimestamp="2026-02-19 09:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:57:38.098694042 +0000 UTC m=+5800.842351501" watchObservedRunningTime="2026-02-19 09:57:38.1132482 +0000 UTC m=+5800.856905649" Feb 19 09:57:39 crc kubenswrapper[4780]: I0219 09:57:39.085458 4780 generic.go:334] "Generic (PLEG): container finished" podID="b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6" containerID="cba5482cea732c9cbfd83d51f40bb5c13895b5428b90a36a562744d2f749bfeb" exitCode=0 Feb 19 09:57:39 crc kubenswrapper[4780]: I0219 09:57:39.086297 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9x46n" event={"ID":"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6","Type":"ContainerDied","Data":"cba5482cea732c9cbfd83d51f40bb5c13895b5428b90a36a562744d2f749bfeb"} Feb 19 09:57:40 crc kubenswrapper[4780]: I0219 09:57:40.118632 4780 generic.go:334] "Generic (PLEG): container finished" podID="ca4ad79a-75c5-46de-8d49-56d2f5bab086" containerID="2eac9f086f7fed663ffa4acc7e34f4a636de321b0c3abf69457361a41e9e61c9" exitCode=0 Feb 19 09:57:40 crc kubenswrapper[4780]: I0219 09:57:40.118681 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8fcqd" 
event={"ID":"ca4ad79a-75c5-46de-8d49-56d2f5bab086","Type":"ContainerDied","Data":"2eac9f086f7fed663ffa4acc7e34f4a636de321b0c3abf69457361a41e9e61c9"} Feb 19 09:57:40 crc kubenswrapper[4780]: I0219 09:57:40.122570 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9x46n" event={"ID":"b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6","Type":"ContainerStarted","Data":"0aee08a58ae33e2da54544602b93d873410c1cbe33884e04eb6b761412cb6d53"} Feb 19 09:57:40 crc kubenswrapper[4780]: I0219 09:57:40.122616 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:40 crc kubenswrapper[4780]: I0219 09:57:40.202323 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-9x46n" podStartSLOduration=4.38658188 podStartE2EDuration="6.202299303s" podCreationTimestamp="2026-02-19 09:57:34 +0000 UTC" firstStartedPulling="2026-02-19 09:57:35.616098353 +0000 UTC m=+5798.359755802" lastFinishedPulling="2026-02-19 09:57:37.431815776 +0000 UTC m=+5800.175473225" observedRunningTime="2026-02-19 09:57:40.197886234 +0000 UTC m=+5802.941543683" watchObservedRunningTime="2026-02-19 09:57:40.202299303 +0000 UTC m=+5802.945956752" Feb 19 09:57:41 crc kubenswrapper[4780]: I0219 09:57:41.139192 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8fcqd" event={"ID":"ca4ad79a-75c5-46de-8d49-56d2f5bab086","Type":"ContainerStarted","Data":"e2a9c4a4496f799ec825e8666571517e1ce23cdfe58d550096e50875f12501f8"} Feb 19 09:57:41 crc kubenswrapper[4780]: I0219 09:57:41.144837 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:41 crc kubenswrapper[4780]: I0219 09:57:41.166068 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-8fcqd" podStartSLOduration=4.859101915 podStartE2EDuration="6.166017717s" 
podCreationTimestamp="2026-02-19 09:57:35 +0000 UTC" firstStartedPulling="2026-02-19 09:57:37.299099963 +0000 UTC m=+5800.042757432" lastFinishedPulling="2026-02-19 09:57:38.606015785 +0000 UTC m=+5801.349673234" observedRunningTime="2026-02-19 09:57:41.163742281 +0000 UTC m=+5803.907399730" watchObservedRunningTime="2026-02-19 09:57:41.166017717 +0000 UTC m=+5803.909675166" Feb 19 09:57:48 crc kubenswrapper[4780]: I0219 09:57:48.475771 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-q4924" Feb 19 09:57:48 crc kubenswrapper[4780]: I0219 09:57:48.938646 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:57:48 crc kubenswrapper[4780]: E0219 09:57:48.939157 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:57:50 crc kubenswrapper[4780]: I0219 09:57:50.025544 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-9x46n" Feb 19 09:57:51 crc kubenswrapper[4780]: I0219 09:57:51.309310 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-8fcqd" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.207873 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b46bfbddf-nv6cb"] Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.209802 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.218620 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tw28k" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.218712 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.219012 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.219216 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.231745 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v554b\" (UniqueName: \"kubernetes.io/projected/846e1f39-0ba7-45ee-bfff-ae20e691cae1-kube-api-access-v554b\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.231803 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/846e1f39-0ba7-45ee-bfff-ae20e691cae1-logs\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.231876 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-scripts\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.231994 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/846e1f39-0ba7-45ee-bfff-ae20e691cae1-horizon-secret-key\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.232098 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-config-data\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.239458 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b46bfbddf-nv6cb"] Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.250058 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.250432 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-httpd" containerID="cri-o://71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf" gracePeriod=30 Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.250388 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-log" containerID="cri-o://3dda3e3be6e83812399203391e538a48d65323c59cdb7f64f6bc9ad745fc0bbd" gracePeriod=30 Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.300584 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.301033 4780 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-log" containerID="cri-o://5a0e3983ef317dbfdfc3b88c762a3201bad158bb4a54fb8c70b9deeb2890293e" gracePeriod=30 Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.301208 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-httpd" containerID="cri-o://e186890e54710a2680669fb6aac5af9c2c432e4fb5f603f6eeddb1cedb154837" gracePeriod=30 Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.314017 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d46c78647-lkdr2"] Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.316241 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.333665 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-scripts\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.333767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-scripts\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.333813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-horizon-secret-key\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.333881 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/846e1f39-0ba7-45ee-bfff-ae20e691cae1-horizon-secret-key\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.333972 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-config-data\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.334007 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-logs\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.334033 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-config-data\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.334063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v554b\" (UniqueName: 
\"kubernetes.io/projected/846e1f39-0ba7-45ee-bfff-ae20e691cae1-kube-api-access-v554b\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.334111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/846e1f39-0ba7-45ee-bfff-ae20e691cae1-logs\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.334190 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrrb\" (UniqueName: \"kubernetes.io/projected/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-kube-api-access-dxrrb\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.334906 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d46c78647-lkdr2"] Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.335055 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-scripts\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.335664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/846e1f39-0ba7-45ee-bfff-ae20e691cae1-logs\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.336570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-config-data\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.356647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/846e1f39-0ba7-45ee-bfff-ae20e691cae1-horizon-secret-key\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.363320 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v554b\" (UniqueName: \"kubernetes.io/projected/846e1f39-0ba7-45ee-bfff-ae20e691cae1-kube-api-access-v554b\") pod \"horizon-6b46bfbddf-nv6cb\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.435579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-logs\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.435627 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-config-data\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.435664 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrrb\" (UniqueName: 
\"kubernetes.io/projected/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-kube-api-access-dxrrb\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.435739 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-scripts\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.435771 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-horizon-secret-key\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.436153 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-logs\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.436909 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-scripts\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.437325 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-config-data\") pod \"horizon-7d46c78647-lkdr2\" (UID: 
\"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.439269 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-horizon-secret-key\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.452075 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrrb\" (UniqueName: \"kubernetes.io/projected/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-kube-api-access-dxrrb\") pod \"horizon-7d46c78647-lkdr2\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.549561 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.636193 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:57:57 crc kubenswrapper[4780]: I0219 09:57:57.974976 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b46bfbddf-nv6cb"] Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.007796 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cb769fd5c-8qgdv"] Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.009690 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.019614 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cb769fd5c-8qgdv"] Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.057154 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-scripts\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.057415 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/638cca61-9d4e-4812-9765-b8b24b72b7d3-logs\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.057566 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkvj\" (UniqueName: \"kubernetes.io/projected/638cca61-9d4e-4812-9765-b8b24b72b7d3-kube-api-access-pgkvj\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.057635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.057674 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/638cca61-9d4e-4812-9765-b8b24b72b7d3-horizon-secret-key\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.098690 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b46bfbddf-nv6cb"] Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.162465 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-scripts\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.162505 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/638cca61-9d4e-4812-9765-b8b24b72b7d3-logs\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.162581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgkvj\" (UniqueName: \"kubernetes.io/projected/638cca61-9d4e-4812-9765-b8b24b72b7d3-kube-api-access-pgkvj\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.162624 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.162653 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/638cca61-9d4e-4812-9765-b8b24b72b7d3-horizon-secret-key\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.164608 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/638cca61-9d4e-4812-9765-b8b24b72b7d3-logs\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.165168 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-scripts\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.166095 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.175925 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/638cca61-9d4e-4812-9765-b8b24b72b7d3-horizon-secret-key\") pod \"horizon-cb769fd5c-8qgdv\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.192646 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgkvj\" (UniqueName: \"kubernetes.io/projected/638cca61-9d4e-4812-9765-b8b24b72b7d3-kube-api-access-pgkvj\") pod \"horizon-cb769fd5c-8qgdv\" (UID: 
\"638cca61-9d4e-4812-9765-b8b24b72b7d3\") " pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.340168 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.341846 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d46c78647-lkdr2"] Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.376590 4780 generic.go:334] "Generic (PLEG): container finished" podID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerID="3dda3e3be6e83812399203391e538a48d65323c59cdb7f64f6bc9ad745fc0bbd" exitCode=143 Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.376667 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c201488a-de5c-4ca4-8354-436bb1e687bf","Type":"ContainerDied","Data":"3dda3e3be6e83812399203391e538a48d65323c59cdb7f64f6bc9ad745fc0bbd"} Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.396389 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b46bfbddf-nv6cb" event={"ID":"846e1f39-0ba7-45ee-bfff-ae20e691cae1","Type":"ContainerStarted","Data":"f19ae9bfadd95425f2000b593aff12e8260ecee44bf4f01b89025a97c08056e6"} Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.399413 4780 generic.go:334] "Generic (PLEG): container finished" podID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerID="5a0e3983ef317dbfdfc3b88c762a3201bad158bb4a54fb8c70b9deeb2890293e" exitCode=143 Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.399443 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06c1a22f-c2bb-4b5d-8b20-8580a91cc533","Type":"ContainerDied","Data":"5a0e3983ef317dbfdfc3b88c762a3201bad158bb4a54fb8c70b9deeb2890293e"} Feb 19 09:57:58 crc kubenswrapper[4780]: I0219 09:57:58.851598 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-cb769fd5c-8qgdv"] Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.052352 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-sb9tk"] Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.063414 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8813-account-create-update-8q5mj"] Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.074242 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sb9tk"] Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.082230 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8813-account-create-update-8q5mj"] Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.413924 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d46c78647-lkdr2" event={"ID":"14a790a6-a0c6-41c6-8471-8d484b5d5b6c","Type":"ContainerStarted","Data":"35f1b9e5a0cd8fb9b707bc909bd2ac8d337149e100e67d3e0bc5152342cc8d20"} Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.416674 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cb769fd5c-8qgdv" event={"ID":"638cca61-9d4e-4812-9765-b8b24b72b7d3","Type":"ContainerStarted","Data":"ec1e918a2621215285820e4c128582d26ed4a50d6d563dd706c5edf4d5be94f0"} Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.957327 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006cb814-2256-49c7-b617-c55e753dbc73" path="/var/lib/kubelet/pods/006cb814-2256-49c7-b617-c55e753dbc73/volumes" Feb 19 09:57:59 crc kubenswrapper[4780]: I0219 09:57:59.959009 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0955c1-e301-443e-a782-0755ce6f6399" path="/var/lib/kubelet/pods/3e0955c1-e301-443e-a782-0755ce6f6399/volumes" Feb 19 09:58:01 crc kubenswrapper[4780]: I0219 09:58:01.442155 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerID="e186890e54710a2680669fb6aac5af9c2c432e4fb5f603f6eeddb1cedb154837" exitCode=0 Feb 19 09:58:01 crc kubenswrapper[4780]: I0219 09:58:01.442332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06c1a22f-c2bb-4b5d-8b20-8580a91cc533","Type":"ContainerDied","Data":"e186890e54710a2680669fb6aac5af9c2c432e4fb5f603f6eeddb1cedb154837"} Feb 19 09:58:01 crc kubenswrapper[4780]: I0219 09:58:01.448926 4780 generic.go:334] "Generic (PLEG): container finished" podID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerID="71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf" exitCode=0 Feb 19 09:58:01 crc kubenswrapper[4780]: I0219 09:58:01.448977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c201488a-de5c-4ca4-8354-436bb1e687bf","Type":"ContainerDied","Data":"71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf"} Feb 19 09:58:03 crc kubenswrapper[4780]: I0219 09:58:03.938584 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:58:03 crc kubenswrapper[4780]: E0219 09:58:03.939192 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:58:05 crc kubenswrapper[4780]: I0219 09:58:05.039198 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-tsv5n"] Feb 19 09:58:05 crc kubenswrapper[4780]: I0219 09:58:05.048145 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-tsv5n"] Feb 19 09:58:05 crc kubenswrapper[4780]: I0219 09:58:05.949424 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58" path="/var/lib/kubelet/pods/f0f4cd7b-2dcd-4981-9bba-ec8e2b88db58/volumes" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.472795 4780 scope.go:117] "RemoveContainer" containerID="9ce7b324a24a61cec3fefd79864f2f6ed08a1250ff0579ef5c195f0e218f2d4b" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.525967 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.531864 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.551982 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c201488a-de5c-4ca4-8354-436bb1e687bf","Type":"ContainerDied","Data":"ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f"} Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.552025 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.552053 4780 scope.go:117] "RemoveContainer" containerID="71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.576634 4780 scope.go:117] "RemoveContainer" containerID="71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.576924 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06c1a22f-c2bb-4b5d-8b20-8580a91cc533","Type":"ContainerDied","Data":"ca2e62f877049d8a122ad02aab47a7044cadffa2b8f6c925f9ad7882f1c7fa9f"} Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.576957 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608053 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-httpd-run\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608136 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-httpd-run\") pod \"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608159 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-config-data\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc 
kubenswrapper[4780]: I0219 09:58:06.608229 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h94md\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-kube-api-access-h94md\") pod \"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-logs\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608360 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-combined-ca-bundle\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608384 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdgjn\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-kube-api-access-tdgjn\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608399 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-scripts\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608464 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-scripts\") pod 
\"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-logs\") pod \"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608516 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-ceph\") pod \"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608550 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-combined-ca-bundle\") pod \"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608592 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-config-data\") pod \"c201488a-de5c-4ca4-8354-436bb1e687bf\" (UID: \"c201488a-de5c-4ca4-8354-436bb1e687bf\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.608612 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-ceph\") pod \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\" (UID: \"06c1a22f-c2bb-4b5d-8b20-8580a91cc533\") " Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.615762 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-kube-api-access-tdgjn" (OuterVolumeSpecName: "kube-api-access-tdgjn") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "kube-api-access-tdgjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.616298 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.616982 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.621143 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-logs" (OuterVolumeSpecName: "logs") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.621978 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-ceph" (OuterVolumeSpecName: "ceph") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.624733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-logs" (OuterVolumeSpecName: "logs") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.631650 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-ceph" (OuterVolumeSpecName: "ceph") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.632304 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-kube-api-access-h94md" (OuterVolumeSpecName: "kube-api-access-h94md") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "kube-api-access-h94md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.639315 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-scripts" (OuterVolumeSpecName: "scripts") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.657763 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-scripts" (OuterVolumeSpecName: "scripts") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.697252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.697318 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710851 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h94md\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-kube-api-access-h94md\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710873 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710883 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710891 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdgjn\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-kube-api-access-tdgjn\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710901 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710910 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710917 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710924 4780 reconciler_common.go:293] "Volume detached 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c201488a-de5c-4ca4-8354-436bb1e687bf-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710932 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710940 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710947 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.710955 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c201488a-de5c-4ca4-8354-436bb1e687bf-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.719553 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-config-data" (OuterVolumeSpecName: "config-data") pod "c201488a-de5c-4ca4-8354-436bb1e687bf" (UID: "c201488a-de5c-4ca4-8354-436bb1e687bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: E0219 09:58:06.745005 4780 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_glance-httpd_glance-default-external-api-0_openstack_c201488a-de5c-4ca4-8354-436bb1e687bf_0 in pod sandbox ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f from index: no such id: '71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf'" containerID="71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf" Feb 19 09:58:06 crc kubenswrapper[4780]: E0219 09:58:06.745075 4780 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_glance-httpd_glance-default-external-api-0_openstack_c201488a-de5c-4ca4-8354-436bb1e687bf_0 in pod sandbox ed6be821bee8be8a73ec56b4cce1a25cfec5c0cf6eae2b78bf63e2bb2c9b6b6f from index: no such id: '71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf'" containerID="71ecad70ecc4c821af2aafd2fa7c0510045b1abc576bccbc1fc54ceaf25509cf" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.745021 4780 scope.go:117] "RemoveContainer" containerID="3dda3e3be6e83812399203391e538a48d65323c59cdb7f64f6bc9ad745fc0bbd" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.745324 4780 scope.go:117] "RemoveContainer" containerID="b6b8405c2b5b8f18202e418c033ca717976b8af494417653cb566d3708b75ad2" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.759063 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-config-data" (OuterVolumeSpecName: "config-data") pod "06c1a22f-c2bb-4b5d-8b20-8580a91cc533" (UID: "06c1a22f-c2bb-4b5d-8b20-8580a91cc533"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.776556 4780 scope.go:117] "RemoveContainer" containerID="ed38482fe7bffa533fb880806206dbfd5522cf4c7c045570d5bc1866893d86d3" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.803839 4780 scope.go:117] "RemoveContainer" containerID="e186890e54710a2680669fb6aac5af9c2c432e4fb5f603f6eeddb1cedb154837" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.812969 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c201488a-de5c-4ca4-8354-436bb1e687bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.813010 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c1a22f-c2bb-4b5d-8b20-8580a91cc533-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.866808 4780 scope.go:117] "RemoveContainer" containerID="5a0e3983ef317dbfdfc3b88c762a3201bad158bb4a54fb8c70b9deeb2890293e" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.904895 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.919804 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.935205 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.952366 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.961594 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: E0219 09:58:06.962397 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-httpd" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962423 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-httpd" Feb 19 09:58:06 crc kubenswrapper[4780]: E0219 09:58:06.962447 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-log" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962455 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-log" Feb 19 09:58:06 crc kubenswrapper[4780]: E0219 09:58:06.962478 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-log" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962485 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-log" Feb 19 09:58:06 crc kubenswrapper[4780]: E0219 09:58:06.962496 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-httpd" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962503 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-httpd" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962733 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-httpd" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962766 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" containerName="glance-log" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962779 4780 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-log" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.962790 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" containerName="glance-httpd" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.964036 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.970428 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.970666 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.970872 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b5hpl" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.971337 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.975721 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.981113 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.986476 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:58:06 crc kubenswrapper[4780]: I0219 09:58:06.995877 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhwx\" (UniqueName: \"kubernetes.io/projected/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-kube-api-access-gjhwx\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018413 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018493 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-config-data\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018516 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018550 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018615 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018638 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-scripts\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2xg\" 
(UniqueName: \"kubernetes.io/projected/42b72844-a58f-497a-a9c3-0707e36e0bb5-kube-api-access-cj2xg\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42b72844-a58f-497a-a9c3-0707e36e0bb5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018755 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b72844-a58f-497a-a9c3-0707e36e0bb5-logs\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018780 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.018796 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.019034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/projected/42b72844-a58f-497a-a9c3-0707e36e0bb5-ceph\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.121266 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42b72844-a58f-497a-a9c3-0707e36e0bb5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.121647 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b72844-a58f-497a-a9c3-0707e36e0bb5-logs\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122302 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b72844-a58f-497a-a9c3-0707e36e0bb5-logs\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42b72844-a58f-497a-a9c3-0707e36e0bb5-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122430 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42b72844-a58f-497a-a9c3-0707e36e0bb5-ceph\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122471 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjhwx\" (UniqueName: \"kubernetes.io/projected/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-kube-api-access-gjhwx\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122513 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-config-data\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " 
pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122617 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122650 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122678 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122708 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.122741 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-scripts\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 
09:58:07.122770 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj2xg\" (UniqueName: \"kubernetes.io/projected/42b72844-a58f-497a-a9c3-0707e36e0bb5-kube-api-access-cj2xg\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.124601 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.124890 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.126736 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42b72844-a58f-497a-a9c3-0707e36e0bb5-ceph\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.128745 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.128905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.129411 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.129450 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-config-data\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.130776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.133176 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.134230 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b72844-a58f-497a-a9c3-0707e36e0bb5-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.150164 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2xg\" (UniqueName: \"kubernetes.io/projected/42b72844-a58f-497a-a9c3-0707e36e0bb5-kube-api-access-cj2xg\") pod \"glance-default-external-api-0\" (UID: \"42b72844-a58f-497a-a9c3-0707e36e0bb5\") " pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.150693 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjhwx\" (UniqueName: \"kubernetes.io/projected/a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87-kube-api-access-gjhwx\") pod \"glance-default-internal-api-0\" (UID: \"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87\") " pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.293736 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.317463 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.597636 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b46bfbddf-nv6cb" event={"ID":"846e1f39-0ba7-45ee-bfff-ae20e691cae1","Type":"ContainerStarted","Data":"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00"} Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.598697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b46bfbddf-nv6cb" event={"ID":"846e1f39-0ba7-45ee-bfff-ae20e691cae1","Type":"ContainerStarted","Data":"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482"} Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.598342 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b46bfbddf-nv6cb" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon" containerID="cri-o://9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00" gracePeriod=30 Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.598046 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b46bfbddf-nv6cb" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon-log" containerID="cri-o://464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482" gracePeriod=30 Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.619969 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cb769fd5c-8qgdv" event={"ID":"638cca61-9d4e-4812-9765-b8b24b72b7d3","Type":"ContainerStarted","Data":"cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5"} Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.620381 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cb769fd5c-8qgdv" 
event={"ID":"638cca61-9d4e-4812-9765-b8b24b72b7d3","Type":"ContainerStarted","Data":"5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af"} Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.629535 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d46c78647-lkdr2" event={"ID":"14a790a6-a0c6-41c6-8471-8d484b5d5b6c","Type":"ContainerStarted","Data":"56e3244ac18a16ffd001b55cb88acdff8205086fc6154c00b6f071968f4ccce7"} Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.629581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d46c78647-lkdr2" event={"ID":"14a790a6-a0c6-41c6-8471-8d484b5d5b6c","Type":"ContainerStarted","Data":"639c1441b7cfde17504ed95a93d3f8dd0826b4ce220922192dc4a46c336765ff"} Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.637363 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.637421 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.640070 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b46bfbddf-nv6cb" podStartSLOduration=2.165193277 podStartE2EDuration="10.640048826s" podCreationTimestamp="2026-02-19 09:57:57 +0000 UTC" firstStartedPulling="2026-02-19 09:57:58.108446635 +0000 UTC m=+5820.852104094" lastFinishedPulling="2026-02-19 09:58:06.583302194 +0000 UTC m=+5829.326959643" observedRunningTime="2026-02-19 09:58:07.621587992 +0000 UTC m=+5830.365245441" watchObservedRunningTime="2026-02-19 09:58:07.640048826 +0000 UTC m=+5830.383706275" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.661346 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cb769fd5c-8qgdv" podStartSLOduration=3.027069288 podStartE2EDuration="10.661325329s" 
podCreationTimestamp="2026-02-19 09:57:57 +0000 UTC" firstStartedPulling="2026-02-19 09:57:58.866714549 +0000 UTC m=+5821.610371998" lastFinishedPulling="2026-02-19 09:58:06.50097059 +0000 UTC m=+5829.244628039" observedRunningTime="2026-02-19 09:58:07.642160167 +0000 UTC m=+5830.385817616" watchObservedRunningTime="2026-02-19 09:58:07.661325329 +0000 UTC m=+5830.404982778" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.681937 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d46c78647-lkdr2" podStartSLOduration=2.583809849 podStartE2EDuration="10.681913485s" podCreationTimestamp="2026-02-19 09:57:57 +0000 UTC" firstStartedPulling="2026-02-19 09:57:58.364221324 +0000 UTC m=+5821.107878773" lastFinishedPulling="2026-02-19 09:58:06.46232495 +0000 UTC m=+5829.205982409" observedRunningTime="2026-02-19 09:58:07.66707773 +0000 UTC m=+5830.410735179" watchObservedRunningTime="2026-02-19 09:58:07.681913485 +0000 UTC m=+5830.425570934" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.953103 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c1a22f-c2bb-4b5d-8b20-8580a91cc533" path="/var/lib/kubelet/pods/06c1a22f-c2bb-4b5d-8b20-8580a91cc533/volumes" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.953950 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c201488a-de5c-4ca4-8354-436bb1e687bf" path="/var/lib/kubelet/pods/c201488a-de5c-4ca4-8354-436bb1e687bf/volumes" Feb 19 09:58:07 crc kubenswrapper[4780]: I0219 09:58:07.971037 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 09:58:08 crc kubenswrapper[4780]: I0219 09:58:08.175466 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 09:58:08 crc kubenswrapper[4780]: W0219 09:58:08.189302 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6dc9c0b_c8ab_4695_b792_3dd0ddc1bd87.slice/crio-3115d02d48d2ba17c2485571a081f323c1375bf9e4e2ef73c4dbf6ee470fcf92 WatchSource:0}: Error finding container 3115d02d48d2ba17c2485571a081f323c1375bf9e4e2ef73c4dbf6ee470fcf92: Status 404 returned error can't find the container with id 3115d02d48d2ba17c2485571a081f323c1375bf9e4e2ef73c4dbf6ee470fcf92 Feb 19 09:58:08 crc kubenswrapper[4780]: I0219 09:58:08.341343 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:58:08 crc kubenswrapper[4780]: I0219 09:58:08.341395 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:58:08 crc kubenswrapper[4780]: I0219 09:58:08.645698 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87","Type":"ContainerStarted","Data":"3115d02d48d2ba17c2485571a081f323c1375bf9e4e2ef73c4dbf6ee470fcf92"} Feb 19 09:58:08 crc kubenswrapper[4780]: I0219 09:58:08.647392 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42b72844-a58f-497a-a9c3-0707e36e0bb5","Type":"ContainerStarted","Data":"fdcc9c44c03316cc668051a81790a79e1160c93569cc765dc6fc20bc2de39e55"} Feb 19 09:58:09 crc kubenswrapper[4780]: I0219 09:58:09.659353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87","Type":"ContainerStarted","Data":"5e7ab60acd97aa6b382b43dec623fb08541ee9aa7bf910d229db6b90872e1558"} Feb 19 09:58:09 crc kubenswrapper[4780]: I0219 09:58:09.659394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87","Type":"ContainerStarted","Data":"0460a9b277fbb7057967df262f3a7a93b1e9c8956672def76fc17e72d91f6db6"} Feb 19 09:58:09 crc kubenswrapper[4780]: I0219 09:58:09.662805 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42b72844-a58f-497a-a9c3-0707e36e0bb5","Type":"ContainerStarted","Data":"3e8fecd11a12932a1cc53fb51599980f881671506316781e42063100314274a9"} Feb 19 09:58:09 crc kubenswrapper[4780]: I0219 09:58:09.663204 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42b72844-a58f-497a-a9c3-0707e36e0bb5","Type":"ContainerStarted","Data":"ab8f0c101eaa838c9bd7a95387c503a3ae89f634c6821af93e817bdb770429c4"} Feb 19 09:58:09 crc kubenswrapper[4780]: I0219 09:58:09.681688 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.681669462 podStartE2EDuration="3.681669462s" podCreationTimestamp="2026-02-19 09:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:58:09.676344992 +0000 UTC m=+5832.420002461" watchObservedRunningTime="2026-02-19 09:58:09.681669462 +0000 UTC m=+5832.425326911" Feb 19 09:58:09 crc kubenswrapper[4780]: I0219 09:58:09.707534 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.707512238 podStartE2EDuration="3.707512238s" podCreationTimestamp="2026-02-19 09:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:58:09.697238605 +0000 UTC m=+5832.440896064" watchObservedRunningTime="2026-02-19 09:58:09.707512238 +0000 UTC m=+5832.451169687" Feb 19 09:58:11 crc kubenswrapper[4780]: I0219 09:58:11.844295 4780 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvfqf"] Feb 19 09:58:11 crc kubenswrapper[4780]: I0219 09:58:11.850585 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:11 crc kubenswrapper[4780]: I0219 09:58:11.864757 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvfqf"] Feb 19 09:58:11 crc kubenswrapper[4780]: I0219 09:58:11.928393 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5pq\" (UniqueName: \"kubernetes.io/projected/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-kube-api-access-dk5pq\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:11 crc kubenswrapper[4780]: I0219 09:58:11.928482 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-utilities\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:11 crc kubenswrapper[4780]: I0219 09:58:11.928601 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-catalog-content\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.030775 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-utilities\") pod \"community-operators-hvfqf\" 
(UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.030987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-catalog-content\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.031095 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5pq\" (UniqueName: \"kubernetes.io/projected/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-kube-api-access-dk5pq\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.031451 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-utilities\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.031898 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-catalog-content\") pod \"community-operators-hvfqf\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.057395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5pq\" (UniqueName: \"kubernetes.io/projected/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-kube-api-access-dk5pq\") pod \"community-operators-hvfqf\" (UID: 
\"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.172692 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:12 crc kubenswrapper[4780]: I0219 09:58:12.766081 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvfqf"] Feb 19 09:58:13 crc kubenswrapper[4780]: I0219 09:58:13.730940 4780 generic.go:334] "Generic (PLEG): container finished" podID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerID="43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0" exitCode=0 Feb 19 09:58:13 crc kubenswrapper[4780]: I0219 09:58:13.731024 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerDied","Data":"43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0"} Feb 19 09:58:13 crc kubenswrapper[4780]: I0219 09:58:13.731392 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerStarted","Data":"21858320e399c68d80a3890054344154261c5a534e394499568113492a51d7f8"} Feb 19 09:58:14 crc kubenswrapper[4780]: I0219 09:58:14.764995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerStarted","Data":"0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324"} Feb 19 09:58:15 crc kubenswrapper[4780]: I0219 09:58:15.824186 4780 generic.go:334] "Generic (PLEG): container finished" podID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerID="0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324" exitCode=0 Feb 19 09:58:15 crc kubenswrapper[4780]: I0219 
09:58:15.825540 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerDied","Data":"0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324"} Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.294395 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.297744 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.318937 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.318998 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.359171 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.360809 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.378795 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.396429 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.550092 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.639748 4780 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d46c78647-lkdr2" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.849012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerStarted","Data":"628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0"} Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.850067 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.850246 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.850331 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.850407 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:17 crc kubenswrapper[4780]: I0219 09:58:17.892876 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvfqf" podStartSLOduration=3.10227705 podStartE2EDuration="6.892844908s" podCreationTimestamp="2026-02-19 09:58:11 +0000 UTC" firstStartedPulling="2026-02-19 09:58:13.735220176 +0000 UTC m=+5836.478877625" lastFinishedPulling="2026-02-19 09:58:17.525788024 +0000 UTC m=+5840.269445483" observedRunningTime="2026-02-19 09:58:17.869056793 +0000 UTC m=+5840.612714242" watchObservedRunningTime="2026-02-19 09:58:17.892844908 +0000 UTC m=+5840.636502357" Feb 19 
09:58:18 crc kubenswrapper[4780]: I0219 09:58:18.343191 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cb769fd5c-8qgdv" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Feb 19 09:58:18 crc kubenswrapper[4780]: I0219 09:58:18.939958 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:58:18 crc kubenswrapper[4780]: E0219 09:58:18.940408 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:58:19 crc kubenswrapper[4780]: I0219 09:58:19.872315 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 09:58:19 crc kubenswrapper[4780]: I0219 09:58:19.872641 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 09:58:20 crc kubenswrapper[4780]: I0219 09:58:20.129380 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:20 crc kubenswrapper[4780]: I0219 09:58:20.172053 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 09:58:20 crc kubenswrapper[4780]: I0219 09:58:20.347145 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 09:58:20 crc kubenswrapper[4780]: I0219 09:58:20.347933 4780 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Feb 19 09:58:20 crc kubenswrapper[4780]: I0219 09:58:20.351821 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 09:58:22 crc kubenswrapper[4780]: I0219 09:58:22.173471 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:22 crc kubenswrapper[4780]: I0219 09:58:22.174149 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:23 crc kubenswrapper[4780]: I0219 09:58:23.254755 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hvfqf" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="registry-server" probeResult="failure" output=< Feb 19 09:58:23 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 09:58:23 crc kubenswrapper[4780]: > Feb 19 09:58:29 crc kubenswrapper[4780]: I0219 09:58:29.753739 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:58:29 crc kubenswrapper[4780]: I0219 09:58:29.943177 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:58:29 crc kubenswrapper[4780]: E0219 09:58:29.943381 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:58:30 crc kubenswrapper[4780]: I0219 09:58:30.281720 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:58:31 crc kubenswrapper[4780]: I0219 09:58:31.498538 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:58:31 crc kubenswrapper[4780]: I0219 09:58:31.882926 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-cb769fd5c-8qgdv" Feb 19 09:58:32 crc kubenswrapper[4780]: I0219 09:58:32.076235 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d46c78647-lkdr2"] Feb 19 09:58:32 crc kubenswrapper[4780]: I0219 09:58:32.076549 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d46c78647-lkdr2" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon-log" containerID="cri-o://639c1441b7cfde17504ed95a93d3f8dd0826b4ce220922192dc4a46c336765ff" gracePeriod=30 Feb 19 09:58:32 crc kubenswrapper[4780]: I0219 09:58:32.076818 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d46c78647-lkdr2" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" containerID="cri-o://56e3244ac18a16ffd001b55cb88acdff8205086fc6154c00b6f071968f4ccce7" gracePeriod=30 Feb 19 09:58:32 crc kubenswrapper[4780]: I0219 09:58:32.266486 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:32 crc kubenswrapper[4780]: I0219 09:58:32.342106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:32 crc kubenswrapper[4780]: I0219 09:58:32.515588 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvfqf"] Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.046809 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1caa-account-create-update-wwwpz"] Feb 19 
09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.056273 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9fmpp"] Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.067638 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9fmpp"] Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.076572 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1caa-account-create-update-wwwpz"] Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.100486 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvfqf" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="registry-server" containerID="cri-o://628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0" gracePeriod=2 Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.727099 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.895225 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-catalog-content\") pod \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.895731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-utilities\") pod \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.895814 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5pq\" (UniqueName: 
\"kubernetes.io/projected/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-kube-api-access-dk5pq\") pod \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\" (UID: \"0acddbc9-5c31-444f-b0a5-0576c31fa7b3\") " Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.896553 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-utilities" (OuterVolumeSpecName: "utilities") pod "0acddbc9-5c31-444f-b0a5-0576c31fa7b3" (UID: "0acddbc9-5c31-444f-b0a5-0576c31fa7b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.896921 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.902246 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-kube-api-access-dk5pq" (OuterVolumeSpecName: "kube-api-access-dk5pq") pod "0acddbc9-5c31-444f-b0a5-0576c31fa7b3" (UID: "0acddbc9-5c31-444f-b0a5-0576c31fa7b3"). InnerVolumeSpecName "kube-api-access-dk5pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.952079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0acddbc9-5c31-444f-b0a5-0576c31fa7b3" (UID: "0acddbc9-5c31-444f-b0a5-0576c31fa7b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.999336 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:34 crc kubenswrapper[4780]: I0219 09:58:34.999374 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5pq\" (UniqueName: \"kubernetes.io/projected/0acddbc9-5c31-444f-b0a5-0576c31fa7b3-kube-api-access-dk5pq\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.112261 4780 generic.go:334] "Generic (PLEG): container finished" podID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerID="628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0" exitCode=0 Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.112404 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerDied","Data":"628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0"} Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.112464 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvfqf" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.113365 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfqf" event={"ID":"0acddbc9-5c31-444f-b0a5-0576c31fa7b3","Type":"ContainerDied","Data":"21858320e399c68d80a3890054344154261c5a534e394499568113492a51d7f8"} Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.113381 4780 scope.go:117] "RemoveContainer" containerID="628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.147476 4780 scope.go:117] "RemoveContainer" containerID="0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.169608 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvfqf"] Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.185141 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvfqf"] Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.191330 4780 scope.go:117] "RemoveContainer" containerID="43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.242516 4780 scope.go:117] "RemoveContainer" containerID="628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0" Feb 19 09:58:35 crc kubenswrapper[4780]: E0219 09:58:35.243195 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0\": container with ID starting with 628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0 not found: ID does not exist" containerID="628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.243275 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0"} err="failed to get container status \"628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0\": rpc error: code = NotFound desc = could not find container \"628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0\": container with ID starting with 628404e6231af98b473fc15f47a24cdf1be2a4c13694e2f7c7279c54af3251e0 not found: ID does not exist" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.243331 4780 scope.go:117] "RemoveContainer" containerID="0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324" Feb 19 09:58:35 crc kubenswrapper[4780]: E0219 09:58:35.243815 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324\": container with ID starting with 0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324 not found: ID does not exist" containerID="0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.243869 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324"} err="failed to get container status \"0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324\": rpc error: code = NotFound desc = could not find container \"0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324\": container with ID starting with 0878a332430ed490bc7ba7682bc1beec4de38755717bc263766b808f00301324 not found: ID does not exist" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.243906 4780 scope.go:117] "RemoveContainer" containerID="43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0" Feb 19 09:58:35 crc kubenswrapper[4780]: E0219 
09:58:35.244462 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0\": container with ID starting with 43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0 not found: ID does not exist" containerID="43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.244525 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0"} err="failed to get container status \"43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0\": rpc error: code = NotFound desc = could not find container \"43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0\": container with ID starting with 43655868c77722e3b962fbd505c0c10912a926466c850a73faae09efb776c8f0 not found: ID does not exist" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.965654 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" path="/var/lib/kubelet/pods/0acddbc9-5c31-444f-b0a5-0576c31fa7b3/volumes" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.968006 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2855a35c-55c0-4a23-bac0-98c18c0ce711" path="/var/lib/kubelet/pods/2855a35c-55c0-4a23-bac0-98c18c0ce711/volumes" Feb 19 09:58:35 crc kubenswrapper[4780]: I0219 09:58:35.970164 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e1e1cc-2317-4d00-a0e4-c9b9ce697969" path="/var/lib/kubelet/pods/95e1e1cc-2317-4d00-a0e4-c9b9ce697969/volumes" Feb 19 09:58:36 crc kubenswrapper[4780]: I0219 09:58:36.129899 4780 generic.go:334] "Generic (PLEG): container finished" podID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" 
containerID="56e3244ac18a16ffd001b55cb88acdff8205086fc6154c00b6f071968f4ccce7" exitCode=0 Feb 19 09:58:36 crc kubenswrapper[4780]: I0219 09:58:36.129982 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d46c78647-lkdr2" event={"ID":"14a790a6-a0c6-41c6-8471-8d484b5d5b6c","Type":"ContainerDied","Data":"56e3244ac18a16ffd001b55cb88acdff8205086fc6154c00b6f071968f4ccce7"} Feb 19 09:58:37 crc kubenswrapper[4780]: I0219 09:58:37.637360 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d46c78647-lkdr2" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.114812 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.150438 4780 generic.go:334] "Generic (PLEG): container finished" podID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerID="9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00" exitCode=137 Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.150782 4780 generic.go:334] "Generic (PLEG): container finished" podID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerID="464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482" exitCode=137 Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.150695 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b46bfbddf-nv6cb" event={"ID":"846e1f39-0ba7-45ee-bfff-ae20e691cae1","Type":"ContainerDied","Data":"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00"} Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.150981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b46bfbddf-nv6cb" 
event={"ID":"846e1f39-0ba7-45ee-bfff-ae20e691cae1","Type":"ContainerDied","Data":"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482"} Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.151075 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b46bfbddf-nv6cb" event={"ID":"846e1f39-0ba7-45ee-bfff-ae20e691cae1","Type":"ContainerDied","Data":"f19ae9bfadd95425f2000b593aff12e8260ecee44bf4f01b89025a97c08056e6"} Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.151181 4780 scope.go:117] "RemoveContainer" containerID="9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.150672 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b46bfbddf-nv6cb" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.287322 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-config-data\") pod \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.287428 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/846e1f39-0ba7-45ee-bfff-ae20e691cae1-horizon-secret-key\") pod \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.287466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/846e1f39-0ba7-45ee-bfff-ae20e691cae1-logs\") pod \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.287577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-scripts\") pod \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.287619 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v554b\" (UniqueName: \"kubernetes.io/projected/846e1f39-0ba7-45ee-bfff-ae20e691cae1-kube-api-access-v554b\") pod \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\" (UID: \"846e1f39-0ba7-45ee-bfff-ae20e691cae1\") " Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.290516 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846e1f39-0ba7-45ee-bfff-ae20e691cae1-logs" (OuterVolumeSpecName: "logs") pod "846e1f39-0ba7-45ee-bfff-ae20e691cae1" (UID: "846e1f39-0ba7-45ee-bfff-ae20e691cae1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.303518 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846e1f39-0ba7-45ee-bfff-ae20e691cae1-kube-api-access-v554b" (OuterVolumeSpecName: "kube-api-access-v554b") pod "846e1f39-0ba7-45ee-bfff-ae20e691cae1" (UID: "846e1f39-0ba7-45ee-bfff-ae20e691cae1"). InnerVolumeSpecName "kube-api-access-v554b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.309076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/846e1f39-0ba7-45ee-bfff-ae20e691cae1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "846e1f39-0ba7-45ee-bfff-ae20e691cae1" (UID: "846e1f39-0ba7-45ee-bfff-ae20e691cae1"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.316778 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-scripts" (OuterVolumeSpecName: "scripts") pod "846e1f39-0ba7-45ee-bfff-ae20e691cae1" (UID: "846e1f39-0ba7-45ee-bfff-ae20e691cae1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.321973 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-config-data" (OuterVolumeSpecName: "config-data") pod "846e1f39-0ba7-45ee-bfff-ae20e691cae1" (UID: "846e1f39-0ba7-45ee-bfff-ae20e691cae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.353635 4780 scope.go:117] "RemoveContainer" containerID="464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.378361 4780 scope.go:117] "RemoveContainer" containerID="9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00" Feb 19 09:58:38 crc kubenswrapper[4780]: E0219 09:58:38.378762 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00\": container with ID starting with 9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00 not found: ID does not exist" containerID="9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.378814 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00"} err="failed to get container status 
\"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00\": rpc error: code = NotFound desc = could not find container \"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00\": container with ID starting with 9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00 not found: ID does not exist" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.378843 4780 scope.go:117] "RemoveContainer" containerID="464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482" Feb 19 09:58:38 crc kubenswrapper[4780]: E0219 09:58:38.379089 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482\": container with ID starting with 464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482 not found: ID does not exist" containerID="464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.379107 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482"} err="failed to get container status \"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482\": rpc error: code = NotFound desc = could not find container \"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482\": container with ID starting with 464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482 not found: ID does not exist" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.379130 4780 scope.go:117] "RemoveContainer" containerID="9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.379312 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00"} err="failed to get 
container status \"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00\": rpc error: code = NotFound desc = could not find container \"9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00\": container with ID starting with 9c1753f9aafb1e0e81c1d90435b8f56a03709c803c74796053ec553e62c2ae00 not found: ID does not exist" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.379329 4780 scope.go:117] "RemoveContainer" containerID="464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.379467 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482"} err="failed to get container status \"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482\": rpc error: code = NotFound desc = could not find container \"464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482\": container with ID starting with 464ff7ebe4b9795f0ffd56446834ff5e5d7b600f944861ec1293499028495482 not found: ID does not exist" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.389986 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/846e1f39-0ba7-45ee-bfff-ae20e691cae1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.390016 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/846e1f39-0ba7-45ee-bfff-ae20e691cae1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.390025 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.390035 4780 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-v554b\" (UniqueName: \"kubernetes.io/projected/846e1f39-0ba7-45ee-bfff-ae20e691cae1-kube-api-access-v554b\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.390045 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846e1f39-0ba7-45ee-bfff-ae20e691cae1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.524432 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b46bfbddf-nv6cb"] Feb 19 09:58:38 crc kubenswrapper[4780]: I0219 09:58:38.537034 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b46bfbddf-nv6cb"] Feb 19 09:58:39 crc kubenswrapper[4780]: I0219 09:58:39.957273 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" path="/var/lib/kubelet/pods/846e1f39-0ba7-45ee-bfff-ae20e691cae1/volumes" Feb 19 09:58:40 crc kubenswrapper[4780]: I0219 09:58:40.939057 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:58:40 crc kubenswrapper[4780]: E0219 09:58:40.939875 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:58:44 crc kubenswrapper[4780]: I0219 09:58:44.056937 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pwhdh"] Feb 19 09:58:44 crc kubenswrapper[4780]: I0219 09:58:44.067633 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pwhdh"] 
Feb 19 09:58:45 crc kubenswrapper[4780]: I0219 09:58:45.954513 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b19ede-98bd-4bd7-9f80-060def069830" path="/var/lib/kubelet/pods/f3b19ede-98bd-4bd7-9f80-060def069830/volumes" Feb 19 09:58:47 crc kubenswrapper[4780]: I0219 09:58:47.637798 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d46c78647-lkdr2" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Feb 19 09:58:54 crc kubenswrapper[4780]: I0219 09:58:54.938403 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:58:54 crc kubenswrapper[4780]: E0219 09:58:54.939117 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:58:57 crc kubenswrapper[4780]: I0219 09:58:57.638177 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d46c78647-lkdr2" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Feb 19 09:58:57 crc kubenswrapper[4780]: I0219 09:58:57.638634 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.419578 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerID="639c1441b7cfde17504ed95a93d3f8dd0826b4ce220922192dc4a46c336765ff" exitCode=137 Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.419657 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d46c78647-lkdr2" event={"ID":"14a790a6-a0c6-41c6-8471-8d484b5d5b6c","Type":"ContainerDied","Data":"639c1441b7cfde17504ed95a93d3f8dd0826b4ce220922192dc4a46c336765ff"} Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.557489 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.703022 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrrb\" (UniqueName: \"kubernetes.io/projected/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-kube-api-access-dxrrb\") pod \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.703196 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-horizon-secret-key\") pod \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.703238 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-scripts\") pod \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.703296 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-config-data\") pod \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\" 
(UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.703335 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-logs\") pod \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\" (UID: \"14a790a6-a0c6-41c6-8471-8d484b5d5b6c\") " Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.704183 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-logs" (OuterVolumeSpecName: "logs") pod "14a790a6-a0c6-41c6-8471-8d484b5d5b6c" (UID: "14a790a6-a0c6-41c6-8471-8d484b5d5b6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.711276 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-kube-api-access-dxrrb" (OuterVolumeSpecName: "kube-api-access-dxrrb") pod "14a790a6-a0c6-41c6-8471-8d484b5d5b6c" (UID: "14a790a6-a0c6-41c6-8471-8d484b5d5b6c"). InnerVolumeSpecName "kube-api-access-dxrrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.711698 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "14a790a6-a0c6-41c6-8471-8d484b5d5b6c" (UID: "14a790a6-a0c6-41c6-8471-8d484b5d5b6c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.733040 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-scripts" (OuterVolumeSpecName: "scripts") pod "14a790a6-a0c6-41c6-8471-8d484b5d5b6c" (UID: "14a790a6-a0c6-41c6-8471-8d484b5d5b6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.738159 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-config-data" (OuterVolumeSpecName: "config-data") pod "14a790a6-a0c6-41c6-8471-8d484b5d5b6c" (UID: "14a790a6-a0c6-41c6-8471-8d484b5d5b6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.805632 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrrb\" (UniqueName: \"kubernetes.io/projected/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-kube-api-access-dxrrb\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.805680 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.805695 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:02 crc kubenswrapper[4780]: I0219 09:59:02.805706 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:02 crc 
kubenswrapper[4780]: I0219 09:59:02.805721 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14a790a6-a0c6-41c6-8471-8d484b5d5b6c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.453543 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d46c78647-lkdr2" event={"ID":"14a790a6-a0c6-41c6-8471-8d484b5d5b6c","Type":"ContainerDied","Data":"35f1b9e5a0cd8fb9b707bc909bd2ac8d337149e100e67d3e0bc5152342cc8d20"} Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.453628 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d46c78647-lkdr2" Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.454084 4780 scope.go:117] "RemoveContainer" containerID="56e3244ac18a16ffd001b55cb88acdff8205086fc6154c00b6f071968f4ccce7" Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.512048 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d46c78647-lkdr2"] Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.528578 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d46c78647-lkdr2"] Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.694692 4780 scope.go:117] "RemoveContainer" containerID="639c1441b7cfde17504ed95a93d3f8dd0826b4ce220922192dc4a46c336765ff" Feb 19 09:59:03 crc kubenswrapper[4780]: I0219 09:59:03.958550 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" path="/var/lib/kubelet/pods/14a790a6-a0c6-41c6-8471-8d484b5d5b6c/volumes" Feb 19 09:59:06 crc kubenswrapper[4780]: I0219 09:59:06.939284 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:59:06 crc kubenswrapper[4780]: E0219 09:59:06.940252 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:59:07 crc kubenswrapper[4780]: I0219 09:59:07.101804 4780 scope.go:117] "RemoveContainer" containerID="b12850f7509ef0cb80cf15498a032d5b3a7fcc82477ef416958ed373d432b037" Feb 19 09:59:07 crc kubenswrapper[4780]: I0219 09:59:07.168608 4780 scope.go:117] "RemoveContainer" containerID="799f74d95d5991913652ea81021ab96507f07c10baac54021134fce9c580d343" Feb 19 09:59:07 crc kubenswrapper[4780]: I0219 09:59:07.215727 4780 scope.go:117] "RemoveContainer" containerID="18841dfce0c18a3f9f096a4ce4b6f06cb53b8cab457ba1b4376e84e2bcc66f6f" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.134838 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cf49b6979-xt4hk"] Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140016 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon-log" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140049 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon-log" Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140087 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140096 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140112 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon" 
Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140118 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon" Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140169 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon-log" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140176 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon-log" Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140192 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="extract-content" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140199 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="extract-content" Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140207 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="registry-server" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140222 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="registry-server" Feb 19 09:59:15 crc kubenswrapper[4780]: E0219 09:59:15.140232 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="extract-utilities" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140240 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="extract-utilities" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140472 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon-log" Feb 19 09:59:15 crc 
kubenswrapper[4780]: I0219 09:59:15.140494 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon-log" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140509 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acddbc9-5c31-444f-b0a5-0576c31fa7b3" containerName="registry-server" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140518 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="846e1f39-0ba7-45ee-bfff-ae20e691cae1" containerName="horizon" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.140530 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a790a6-a0c6-41c6-8471-8d484b5d5b6c" containerName="horizon" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.142039 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.147627 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cf49b6979-xt4hk"] Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.286187 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0938c1-1dba-442f-ba05-e445bb201c42-logs\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.286787 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb0938c1-1dba-442f-ba05-e445bb201c42-scripts\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.286854 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb0938c1-1dba-442f-ba05-e445bb201c42-horizon-secret-key\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.286925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlv2z\" (UniqueName: \"kubernetes.io/projected/bb0938c1-1dba-442f-ba05-e445bb201c42-kube-api-access-vlv2z\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.287021 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb0938c1-1dba-442f-ba05-e445bb201c42-config-data\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.388920 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlv2z\" (UniqueName: \"kubernetes.io/projected/bb0938c1-1dba-442f-ba05-e445bb201c42-kube-api-access-vlv2z\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.389042 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb0938c1-1dba-442f-ba05-e445bb201c42-config-data\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.389103 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0938c1-1dba-442f-ba05-e445bb201c42-logs\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.389206 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb0938c1-1dba-442f-ba05-e445bb201c42-scripts\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.389251 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb0938c1-1dba-442f-ba05-e445bb201c42-horizon-secret-key\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.390432 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0938c1-1dba-442f-ba05-e445bb201c42-logs\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.391829 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb0938c1-1dba-442f-ba05-e445bb201c42-scripts\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.393143 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb0938c1-1dba-442f-ba05-e445bb201c42-config-data\") pod 
\"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.403712 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb0938c1-1dba-442f-ba05-e445bb201c42-horizon-secret-key\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.422471 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlv2z\" (UniqueName: \"kubernetes.io/projected/bb0938c1-1dba-442f-ba05-e445bb201c42-kube-api-access-vlv2z\") pod \"horizon-7cf49b6979-xt4hk\" (UID: \"bb0938c1-1dba-442f-ba05-e445bb201c42\") " pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:15 crc kubenswrapper[4780]: I0219 09:59:15.471655 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.007736 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cf49b6979-xt4hk"] Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.595600 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cf49b6979-xt4hk" event={"ID":"bb0938c1-1dba-442f-ba05-e445bb201c42","Type":"ContainerStarted","Data":"d89fa71e6c3c9761ce428014d2cf3bf7a1af106b36d2d49b5c1517812def1472"} Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.596099 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cf49b6979-xt4hk" event={"ID":"bb0938c1-1dba-442f-ba05-e445bb201c42","Type":"ContainerStarted","Data":"80a4fae95b9b43a50476bffc553c02c20d1731ad7e3619148ffeb31f1a67147c"} Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.596114 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7cf49b6979-xt4hk" event={"ID":"bb0938c1-1dba-442f-ba05-e445bb201c42","Type":"ContainerStarted","Data":"c8029f1f9935ed103671aaf18d32d07ee1456e2ef28f098484df255df3eebba6"} Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.625710 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-fccp7"] Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.627498 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.631946 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cf49b6979-xt4hk" podStartSLOduration=1.631920208 podStartE2EDuration="1.631920208s" podCreationTimestamp="2026-02-19 09:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:59:16.625675384 +0000 UTC m=+5899.369332843" watchObservedRunningTime="2026-02-19 09:59:16.631920208 +0000 UTC m=+5899.375577657" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.643727 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fccp7"] Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.725079 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-7c52-account-create-update-2hnl9"] Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.726347 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.732573 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.745767 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-7c52-account-create-update-2hnl9"] Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.819549 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw82d\" (UniqueName: \"kubernetes.io/projected/27352a2d-b7f7-4056-9ccb-b9947c758e3c-kube-api-access-tw82d\") pod \"heat-db-create-fccp7\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.819607 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27352a2d-b7f7-4056-9ccb-b9947c758e3c-operator-scripts\") pod \"heat-db-create-fccp7\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.921991 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw82d\" (UniqueName: \"kubernetes.io/projected/27352a2d-b7f7-4056-9ccb-b9947c758e3c-kube-api-access-tw82d\") pod \"heat-db-create-fccp7\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.922052 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27352a2d-b7f7-4056-9ccb-b9947c758e3c-operator-scripts\") pod \"heat-db-create-fccp7\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc 
kubenswrapper[4780]: I0219 09:59:16.922085 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztf7\" (UniqueName: \"kubernetes.io/projected/614517db-1826-4ac5-baaf-b1348e466574-kube-api-access-mztf7\") pod \"heat-7c52-account-create-update-2hnl9\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.922302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/614517db-1826-4ac5-baaf-b1348e466574-operator-scripts\") pod \"heat-7c52-account-create-update-2hnl9\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.923071 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27352a2d-b7f7-4056-9ccb-b9947c758e3c-operator-scripts\") pod \"heat-db-create-fccp7\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.948947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw82d\" (UniqueName: \"kubernetes.io/projected/27352a2d-b7f7-4056-9ccb-b9947c758e3c-kube-api-access-tw82d\") pod \"heat-db-create-fccp7\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " pod="openstack/heat-db-create-fccp7" Feb 19 09:59:16 crc kubenswrapper[4780]: I0219 09:59:16.949749 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-fccp7" Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.024322 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztf7\" (UniqueName: \"kubernetes.io/projected/614517db-1826-4ac5-baaf-b1348e466574-kube-api-access-mztf7\") pod \"heat-7c52-account-create-update-2hnl9\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.024723 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/614517db-1826-4ac5-baaf-b1348e466574-operator-scripts\") pod \"heat-7c52-account-create-update-2hnl9\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.025740 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/614517db-1826-4ac5-baaf-b1348e466574-operator-scripts\") pod \"heat-7c52-account-create-update-2hnl9\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.053101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztf7\" (UniqueName: \"kubernetes.io/projected/614517db-1826-4ac5-baaf-b1348e466574-kube-api-access-mztf7\") pod \"heat-7c52-account-create-update-2hnl9\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.353094 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.435243 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fccp7"] Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.607951 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fccp7" event={"ID":"27352a2d-b7f7-4056-9ccb-b9947c758e3c","Type":"ContainerStarted","Data":"e53d070b684ead626257165216497008f6e629353bace4f6d22f8a7cceb128d8"} Feb 19 09:59:17 crc kubenswrapper[4780]: I0219 09:59:17.897998 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-7c52-account-create-update-2hnl9"] Feb 19 09:59:18 crc kubenswrapper[4780]: I0219 09:59:18.621698 4780 generic.go:334] "Generic (PLEG): container finished" podID="27352a2d-b7f7-4056-9ccb-b9947c758e3c" containerID="f3db4332ef506edb9d029ab778420de28631dd77c8c719f642de17d8db5358e0" exitCode=0 Feb 19 09:59:18 crc kubenswrapper[4780]: I0219 09:59:18.621763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fccp7" event={"ID":"27352a2d-b7f7-4056-9ccb-b9947c758e3c","Type":"ContainerDied","Data":"f3db4332ef506edb9d029ab778420de28631dd77c8c719f642de17d8db5358e0"} Feb 19 09:59:18 crc kubenswrapper[4780]: I0219 09:59:18.624234 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7c52-account-create-update-2hnl9" event={"ID":"614517db-1826-4ac5-baaf-b1348e466574","Type":"ContainerStarted","Data":"ba2d0ea12088847876cedb52fa3deb0d8d3a8e9131bffcd9d4bfafaad5983e9e"} Feb 19 09:59:18 crc kubenswrapper[4780]: I0219 09:59:18.624308 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7c52-account-create-update-2hnl9" event={"ID":"614517db-1826-4ac5-baaf-b1348e466574","Type":"ContainerStarted","Data":"b67ea34ecda909f19dcf2e2a4fa390c916a774e7a3ca4a63c259f35ac14d4aad"} Feb 19 09:59:18 crc kubenswrapper[4780]: I0219 09:59:18.666258 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-7c52-account-create-update-2hnl9" podStartSLOduration=2.666237055 podStartE2EDuration="2.666237055s" podCreationTimestamp="2026-02-19 09:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:59:18.658460604 +0000 UTC m=+5901.402118063" watchObservedRunningTime="2026-02-19 09:59:18.666237055 +0000 UTC m=+5901.409894504" Feb 19 09:59:19 crc kubenswrapper[4780]: I0219 09:59:19.638997 4780 generic.go:334] "Generic (PLEG): container finished" podID="614517db-1826-4ac5-baaf-b1348e466574" containerID="ba2d0ea12088847876cedb52fa3deb0d8d3a8e9131bffcd9d4bfafaad5983e9e" exitCode=0 Feb 19 09:59:19 crc kubenswrapper[4780]: I0219 09:59:19.639094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7c52-account-create-update-2hnl9" event={"ID":"614517db-1826-4ac5-baaf-b1348e466574","Type":"ContainerDied","Data":"ba2d0ea12088847876cedb52fa3deb0d8d3a8e9131bffcd9d4bfafaad5983e9e"} Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.148285 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-fccp7" Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.226739 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27352a2d-b7f7-4056-9ccb-b9947c758e3c-operator-scripts\") pod \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.227159 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw82d\" (UniqueName: \"kubernetes.io/projected/27352a2d-b7f7-4056-9ccb-b9947c758e3c-kube-api-access-tw82d\") pod \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\" (UID: \"27352a2d-b7f7-4056-9ccb-b9947c758e3c\") " Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.227452 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27352a2d-b7f7-4056-9ccb-b9947c758e3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27352a2d-b7f7-4056-9ccb-b9947c758e3c" (UID: "27352a2d-b7f7-4056-9ccb-b9947c758e3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.227882 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27352a2d-b7f7-4056-9ccb-b9947c758e3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.234005 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27352a2d-b7f7-4056-9ccb-b9947c758e3c-kube-api-access-tw82d" (OuterVolumeSpecName: "kube-api-access-tw82d") pod "27352a2d-b7f7-4056-9ccb-b9947c758e3c" (UID: "27352a2d-b7f7-4056-9ccb-b9947c758e3c"). InnerVolumeSpecName "kube-api-access-tw82d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.329865 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw82d\" (UniqueName: \"kubernetes.io/projected/27352a2d-b7f7-4056-9ccb-b9947c758e3c-kube-api-access-tw82d\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.652065 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fccp7" event={"ID":"27352a2d-b7f7-4056-9ccb-b9947c758e3c","Type":"ContainerDied","Data":"e53d070b684ead626257165216497008f6e629353bace4f6d22f8a7cceb128d8"} Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.652091 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fccp7" Feb 19 09:59:20 crc kubenswrapper[4780]: I0219 09:59:20.652134 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53d070b684ead626257165216497008f6e629353bace4f6d22f8a7cceb128d8" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.087749 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.145295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mztf7\" (UniqueName: \"kubernetes.io/projected/614517db-1826-4ac5-baaf-b1348e466574-kube-api-access-mztf7\") pod \"614517db-1826-4ac5-baaf-b1348e466574\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.145677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/614517db-1826-4ac5-baaf-b1348e466574-operator-scripts\") pod \"614517db-1826-4ac5-baaf-b1348e466574\" (UID: \"614517db-1826-4ac5-baaf-b1348e466574\") " Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.146501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614517db-1826-4ac5-baaf-b1348e466574-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "614517db-1826-4ac5-baaf-b1348e466574" (UID: "614517db-1826-4ac5-baaf-b1348e466574"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.147064 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/614517db-1826-4ac5-baaf-b1348e466574-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.155394 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614517db-1826-4ac5-baaf-b1348e466574-kube-api-access-mztf7" (OuterVolumeSpecName: "kube-api-access-mztf7") pod "614517db-1826-4ac5-baaf-b1348e466574" (UID: "614517db-1826-4ac5-baaf-b1348e466574"). InnerVolumeSpecName "kube-api-access-mztf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.249587 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mztf7\" (UniqueName: \"kubernetes.io/projected/614517db-1826-4ac5-baaf-b1348e466574-kube-api-access-mztf7\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.668759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7c52-account-create-update-2hnl9" event={"ID":"614517db-1826-4ac5-baaf-b1348e466574","Type":"ContainerDied","Data":"b67ea34ecda909f19dcf2e2a4fa390c916a774e7a3ca4a63c259f35ac14d4aad"} Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.668807 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b67ea34ecda909f19dcf2e2a4fa390c916a774e7a3ca4a63c259f35ac14d4aad" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.668806 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-7c52-account-create-update-2hnl9" Feb 19 09:59:21 crc kubenswrapper[4780]: I0219 09:59:21.940332 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:59:21 crc kubenswrapper[4780]: E0219 09:59:21.940674 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:59:25 crc kubenswrapper[4780]: I0219 09:59:25.472713 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:25 crc kubenswrapper[4780]: I0219 09:59:25.473229 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.053989 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2kmv2"] Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.066853 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-054e-account-create-update-vcjsc"] Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.077614 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2kmv2"] Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.086751 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-054e-account-create-update-vcjsc"] Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.842214 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zmgww"] Feb 19 09:59:26 crc kubenswrapper[4780]: E0219 09:59:26.842656 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614517db-1826-4ac5-baaf-b1348e466574" containerName="mariadb-account-create-update" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.842674 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="614517db-1826-4ac5-baaf-b1348e466574" containerName="mariadb-account-create-update" Feb 19 09:59:26 crc kubenswrapper[4780]: E0219 09:59:26.842689 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27352a2d-b7f7-4056-9ccb-b9947c758e3c" containerName="mariadb-database-create" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.842697 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="27352a2d-b7f7-4056-9ccb-b9947c758e3c" containerName="mariadb-database-create" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.842944 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="27352a2d-b7f7-4056-9ccb-b9947c758e3c" containerName="mariadb-database-create" Feb 19 09:59:26 crc 
kubenswrapper[4780]: I0219 09:59:26.842957 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="614517db-1826-4ac5-baaf-b1348e466574" containerName="mariadb-account-create-update" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.843755 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.849204 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nwzqt" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.857671 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.861956 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zmgww"] Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.901751 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-combined-ca-bundle\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.902242 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-config-data\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:26 crc kubenswrapper[4780]: I0219 09:59:26.902285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxkxw\" (UniqueName: \"kubernetes.io/projected/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-kube-api-access-jxkxw\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " 
pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.004204 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-config-data\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.004292 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxkxw\" (UniqueName: \"kubernetes.io/projected/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-kube-api-access-jxkxw\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.004436 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-combined-ca-bundle\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.016518 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-combined-ca-bundle\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.020016 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-config-data\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.029283 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jxkxw\" (UniqueName: \"kubernetes.io/projected/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-kube-api-access-jxkxw\") pod \"heat-db-sync-zmgww\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.172590 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.732512 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zmgww"] Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.756371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zmgww" event={"ID":"49d6b6ef-a763-4a2e-ba5b-844d3095ca19","Type":"ContainerStarted","Data":"fa15fccb462ef3d89b2aee85da9cbb9da4de857700a83ae41c71db645969f23f"} Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.961495 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2271c34-fcda-4312-9e5b-89a96811c0a1" path="/var/lib/kubelet/pods/c2271c34-fcda-4312-9e5b-89a96811c0a1/volumes" Feb 19 09:59:27 crc kubenswrapper[4780]: I0219 09:59:27.962472 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e" path="/var/lib/kubelet/pods/d8ef6dc3-42f7-4a13-acd0-2bb57a68f97e/volumes" Feb 19 09:59:34 crc kubenswrapper[4780]: I0219 09:59:34.043216 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hr7hx"] Feb 19 09:59:34 crc kubenswrapper[4780]: I0219 09:59:34.052929 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hr7hx"] Feb 19 09:59:35 crc kubenswrapper[4780]: I0219 09:59:35.474212 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cf49b6979-xt4hk" podUID="bb0938c1-1dba-442f-ba05-e445bb201c42" containerName="horizon" probeResult="failure" output="Get 
\"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Feb 19 09:59:35 crc kubenswrapper[4780]: I0219 09:59:35.909189 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zmgww" event={"ID":"49d6b6ef-a763-4a2e-ba5b-844d3095ca19","Type":"ContainerStarted","Data":"6bd77726cca6241b9ea9834d9580a6911c0f404f727819faba61a3a6a7ba19e3"} Feb 19 09:59:35 crc kubenswrapper[4780]: I0219 09:59:35.942974 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:59:35 crc kubenswrapper[4780]: E0219 09:59:35.943352 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:59:35 crc kubenswrapper[4780]: I0219 09:59:35.953895 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zmgww" podStartSLOduration=2.8491646299999998 podStartE2EDuration="9.953866692s" podCreationTimestamp="2026-02-19 09:59:26 +0000 UTC" firstStartedPulling="2026-02-19 09:59:27.736140974 +0000 UTC m=+5910.479798423" lastFinishedPulling="2026-02-19 09:59:34.840843046 +0000 UTC m=+5917.584500485" observedRunningTime="2026-02-19 09:59:35.934895795 +0000 UTC m=+5918.678553254" watchObservedRunningTime="2026-02-19 09:59:35.953866692 +0000 UTC m=+5918.697524151" Feb 19 09:59:35 crc kubenswrapper[4780]: I0219 09:59:35.962310 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a4bf86-54ba-4d73-aa81-3f3172bcc365" path="/var/lib/kubelet/pods/c0a4bf86-54ba-4d73-aa81-3f3172bcc365/volumes" Feb 19 09:59:36 crc 
kubenswrapper[4780]: I0219 09:59:36.934409 4780 generic.go:334] "Generic (PLEG): container finished" podID="49d6b6ef-a763-4a2e-ba5b-844d3095ca19" containerID="6bd77726cca6241b9ea9834d9580a6911c0f404f727819faba61a3a6a7ba19e3" exitCode=0 Feb 19 09:59:36 crc kubenswrapper[4780]: I0219 09:59:36.934537 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zmgww" event={"ID":"49d6b6ef-a763-4a2e-ba5b-844d3095ca19","Type":"ContainerDied","Data":"6bd77726cca6241b9ea9834d9580a6911c0f404f727819faba61a3a6a7ba19e3"} Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.408872 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.508596 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxkxw\" (UniqueName: \"kubernetes.io/projected/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-kube-api-access-jxkxw\") pod \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.508691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-combined-ca-bundle\") pod \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.508731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-config-data\") pod \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\" (UID: \"49d6b6ef-a763-4a2e-ba5b-844d3095ca19\") " Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.516194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-kube-api-access-jxkxw" (OuterVolumeSpecName: "kube-api-access-jxkxw") pod "49d6b6ef-a763-4a2e-ba5b-844d3095ca19" (UID: "49d6b6ef-a763-4a2e-ba5b-844d3095ca19"). InnerVolumeSpecName "kube-api-access-jxkxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.543497 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49d6b6ef-a763-4a2e-ba5b-844d3095ca19" (UID: "49d6b6ef-a763-4a2e-ba5b-844d3095ca19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.605554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-config-data" (OuterVolumeSpecName: "config-data") pod "49d6b6ef-a763-4a2e-ba5b-844d3095ca19" (UID: "49d6b6ef-a763-4a2e-ba5b-844d3095ca19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.611366 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxkxw\" (UniqueName: \"kubernetes.io/projected/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-kube-api-access-jxkxw\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.611421 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.611437 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d6b6ef-a763-4a2e-ba5b-844d3095ca19-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.969304 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zmgww" event={"ID":"49d6b6ef-a763-4a2e-ba5b-844d3095ca19","Type":"ContainerDied","Data":"fa15fccb462ef3d89b2aee85da9cbb9da4de857700a83ae41c71db645969f23f"} Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.969374 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa15fccb462ef3d89b2aee85da9cbb9da4de857700a83ae41c71db645969f23f" Feb 19 09:59:38 crc kubenswrapper[4780]: I0219 09:59:38.969483 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zmgww" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.162445 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7cbb6c7cf-xmjgd"] Feb 19 09:59:40 crc kubenswrapper[4780]: E0219 09:59:40.165659 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d6b6ef-a763-4a2e-ba5b-844d3095ca19" containerName="heat-db-sync" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.165805 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d6b6ef-a763-4a2e-ba5b-844d3095ca19" containerName="heat-db-sync" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.166244 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d6b6ef-a763-4a2e-ba5b-844d3095ca19" containerName="heat-db-sync" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.167463 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.172733 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.172763 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.173251 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nwzqt" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.184754 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cbb6c7cf-xmjgd"] Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.252201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-config-data-custom\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: 
\"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.252350 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-combined-ca-bundle\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.252484 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-config-data\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.252541 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsp9\" (UniqueName: \"kubernetes.io/projected/dff9e661-9dbc-45a0-8545-63574793fc59-kube-api-access-rcsp9\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.355434 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-config-data\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.355540 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsp9\" (UniqueName: \"kubernetes.io/projected/dff9e661-9dbc-45a0-8545-63574793fc59-kube-api-access-rcsp9\") pod 
\"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.355626 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-config-data-custom\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.355701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-combined-ca-bundle\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.367558 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-config-data-custom\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.369780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-config-data\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.369882 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-755766ddd-nmcl6"] Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.372551 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.375587 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.384393 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff9e661-9dbc-45a0-8545-63574793fc59-combined-ca-bundle\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.392849 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-755766ddd-nmcl6"] Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.393525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsp9\" (UniqueName: \"kubernetes.io/projected/dff9e661-9dbc-45a0-8545-63574793fc59-kube-api-access-rcsp9\") pod \"heat-engine-7cbb6c7cf-xmjgd\" (UID: \"dff9e661-9dbc-45a0-8545-63574793fc59\") " pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.458398 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-config-data\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.458569 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-combined-ca-bundle\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc 
kubenswrapper[4780]: I0219 09:59:40.458716 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d7c68c799-9jfcj"] Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.458777 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfx6\" (UniqueName: \"kubernetes.io/projected/cfccb4ce-3da9-487a-b154-ed091e2a0a60-kube-api-access-tlfx6\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.458822 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-config-data-custom\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.460499 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.463226 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.480026 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d7c68c799-9jfcj"] Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.490639 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.560375 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-combined-ca-bundle\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.560769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-config-data\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.560863 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlfx6\" (UniqueName: \"kubernetes.io/projected/cfccb4ce-3da9-487a-b154-ed091e2a0a60-kube-api-access-tlfx6\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.560908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-config-data-custom\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.560937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-config-data-custom\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " 
pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.560984 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-combined-ca-bundle\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.561022 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-config-data\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.561068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwtnt\" (UniqueName: \"kubernetes.io/projected/1493d17c-fff8-4696-a7e3-b5c686cb2b82-kube-api-access-jwtnt\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.568244 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-config-data\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.570353 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-config-data-custom\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 
09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.576911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfccb4ce-3da9-487a-b154-ed091e2a0a60-combined-ca-bundle\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.583928 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlfx6\" (UniqueName: \"kubernetes.io/projected/cfccb4ce-3da9-487a-b154-ed091e2a0a60-kube-api-access-tlfx6\") pod \"heat-api-755766ddd-nmcl6\" (UID: \"cfccb4ce-3da9-487a-b154-ed091e2a0a60\") " pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.662632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-config-data-custom\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.662723 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-combined-ca-bundle\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.662798 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwtnt\" (UniqueName: \"kubernetes.io/projected/1493d17c-fff8-4696-a7e3-b5c686cb2b82-kube-api-access-jwtnt\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 
09:59:40.662919 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-config-data\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.671158 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-config-data\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.679349 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-combined-ca-bundle\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.680608 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1493d17c-fff8-4696-a7e3-b5c686cb2b82-config-data-custom\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.685470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwtnt\" (UniqueName: \"kubernetes.io/projected/1493d17c-fff8-4696-a7e3-b5c686cb2b82-kube-api-access-jwtnt\") pod \"heat-cfnapi-d7c68c799-9jfcj\" (UID: \"1493d17c-fff8-4696-a7e3-b5c686cb2b82\") " pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.779761 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:40 crc kubenswrapper[4780]: I0219 09:59:40.787562 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:41 crc kubenswrapper[4780]: I0219 09:59:41.051580 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cbb6c7cf-xmjgd"] Feb 19 09:59:41 crc kubenswrapper[4780]: I0219 09:59:41.333010 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-755766ddd-nmcl6"] Feb 19 09:59:41 crc kubenswrapper[4780]: W0219 09:59:41.344083 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfccb4ce_3da9_487a_b154_ed091e2a0a60.slice/crio-d1d7bad8ae28b16064f68d748400780917c67264f6ce914d546c92aeb365b512 WatchSource:0}: Error finding container d1d7bad8ae28b16064f68d748400780917c67264f6ce914d546c92aeb365b512: Status 404 returned error can't find the container with id d1d7bad8ae28b16064f68d748400780917c67264f6ce914d546c92aeb365b512 Feb 19 09:59:41 crc kubenswrapper[4780]: W0219 09:59:41.350998 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1493d17c_fff8_4696_a7e3_b5c686cb2b82.slice/crio-047d1441deae43a69a1733db7901ca489641e0cedc95145a3c70b0aea8b96d63 WatchSource:0}: Error finding container 047d1441deae43a69a1733db7901ca489641e0cedc95145a3c70b0aea8b96d63: Status 404 returned error can't find the container with id 047d1441deae43a69a1733db7901ca489641e0cedc95145a3c70b0aea8b96d63 Feb 19 09:59:41 crc kubenswrapper[4780]: I0219 09:59:41.351391 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d7c68c799-9jfcj"] Feb 19 09:59:42 crc kubenswrapper[4780]: I0219 09:59:42.027174 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-755766ddd-nmcl6" 
event={"ID":"cfccb4ce-3da9-487a-b154-ed091e2a0a60","Type":"ContainerStarted","Data":"d1d7bad8ae28b16064f68d748400780917c67264f6ce914d546c92aeb365b512"} Feb 19 09:59:42 crc kubenswrapper[4780]: I0219 09:59:42.028853 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" event={"ID":"1493d17c-fff8-4696-a7e3-b5c686cb2b82","Type":"ContainerStarted","Data":"047d1441deae43a69a1733db7901ca489641e0cedc95145a3c70b0aea8b96d63"} Feb 19 09:59:42 crc kubenswrapper[4780]: I0219 09:59:42.033342 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cbb6c7cf-xmjgd" event={"ID":"dff9e661-9dbc-45a0-8545-63574793fc59","Type":"ContainerStarted","Data":"5d1c7e032c3d94ddf687fe36fa3cb01b54c6b8654f9a14be3ba7ce539d2186e1"} Feb 19 09:59:42 crc kubenswrapper[4780]: I0219 09:59:42.033523 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cbb6c7cf-xmjgd" event={"ID":"dff9e661-9dbc-45a0-8545-63574793fc59","Type":"ContainerStarted","Data":"15d44101fbe713f8c2023ab00cd41fb98bf95408c611537f69e04cd164f95bd3"} Feb 19 09:59:42 crc kubenswrapper[4780]: I0219 09:59:42.033585 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7cbb6c7cf-xmjgd" Feb 19 09:59:42 crc kubenswrapper[4780]: I0219 09:59:42.061062 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7cbb6c7cf-xmjgd" podStartSLOduration=2.061036997 podStartE2EDuration="2.061036997s" podCreationTimestamp="2026-02-19 09:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:59:42.05788931 +0000 UTC m=+5924.801546759" watchObservedRunningTime="2026-02-19 09:59:42.061036997 +0000 UTC m=+5924.804694446" Feb 19 09:59:44 crc kubenswrapper[4780]: I0219 09:59:44.088260 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-755766ddd-nmcl6" 
event={"ID":"cfccb4ce-3da9-487a-b154-ed091e2a0a60","Type":"ContainerStarted","Data":"bde5c50a0597ddf0e907d0a5345c9aa59859acc43c2b1a65a36f7b6b71e1e9c3"} Feb 19 09:59:44 crc kubenswrapper[4780]: I0219 09:59:44.089997 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:44 crc kubenswrapper[4780]: I0219 09:59:44.096859 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" event={"ID":"1493d17c-fff8-4696-a7e3-b5c686cb2b82","Type":"ContainerStarted","Data":"dae2533fc0e7665f5983e38c3326b8d9e563122f71ec719ee64313cd2769480e"} Feb 19 09:59:44 crc kubenswrapper[4780]: I0219 09:59:44.097253 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:44 crc kubenswrapper[4780]: I0219 09:59:44.127167 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-755766ddd-nmcl6" podStartSLOduration=1.905109474 podStartE2EDuration="4.127118875s" podCreationTimestamp="2026-02-19 09:59:40 +0000 UTC" firstStartedPulling="2026-02-19 09:59:41.347186606 +0000 UTC m=+5924.090844055" lastFinishedPulling="2026-02-19 09:59:43.569196017 +0000 UTC m=+5926.312853456" observedRunningTime="2026-02-19 09:59:44.109961183 +0000 UTC m=+5926.853618622" watchObservedRunningTime="2026-02-19 09:59:44.127118875 +0000 UTC m=+5926.870776324" Feb 19 09:59:44 crc kubenswrapper[4780]: I0219 09:59:44.142674 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" podStartSLOduration=1.917485798 podStartE2EDuration="4.142647697s" podCreationTimestamp="2026-02-19 09:59:40 +0000 UTC" firstStartedPulling="2026-02-19 09:59:41.354246109 +0000 UTC m=+5924.097903558" lastFinishedPulling="2026-02-19 09:59:43.579408008 +0000 UTC m=+5926.323065457" observedRunningTime="2026-02-19 09:59:44.135696746 +0000 UTC m=+5926.879354205" 
watchObservedRunningTime="2026-02-19 09:59:44.142647697 +0000 UTC m=+5926.886305146" Feb 19 09:59:46 crc kubenswrapper[4780]: I0219 09:59:46.939008 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:59:46 crc kubenswrapper[4780]: E0219 09:59:46.940594 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 09:59:47 crc kubenswrapper[4780]: I0219 09:59:47.628625 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:49 crc kubenswrapper[4780]: I0219 09:59:49.540963 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cf49b6979-xt4hk" Feb 19 09:59:49 crc kubenswrapper[4780]: I0219 09:59:49.675849 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cb769fd5c-8qgdv"] Feb 19 09:59:49 crc kubenswrapper[4780]: I0219 09:59:49.676133 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cb769fd5c-8qgdv" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon-log" containerID="cri-o://5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af" gracePeriod=30 Feb 19 09:59:49 crc kubenswrapper[4780]: I0219 09:59:49.676361 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cb769fd5c-8qgdv" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon" containerID="cri-o://cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5" gracePeriod=30 Feb 19 09:59:52 crc 
kubenswrapper[4780]: I0219 09:59:52.182864 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-755766ddd-nmcl6" Feb 19 09:59:52 crc kubenswrapper[4780]: I0219 09:59:52.218899 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d7c68c799-9jfcj" Feb 19 09:59:54 crc kubenswrapper[4780]: I0219 09:59:54.232746 4780 generic.go:334] "Generic (PLEG): container finished" podID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerID="cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5" exitCode=0 Feb 19 09:59:54 crc kubenswrapper[4780]: I0219 09:59:54.232805 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cb769fd5c-8qgdv" event={"ID":"638cca61-9d4e-4812-9765-b8b24b72b7d3","Type":"ContainerDied","Data":"cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5"} Feb 19 09:59:58 crc kubenswrapper[4780]: I0219 09:59:58.341672 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cb769fd5c-8qgdv" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Feb 19 09:59:59 crc kubenswrapper[4780]: I0219 09:59:59.938666 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 09:59:59 crc kubenswrapper[4780]: E0219 09:59:59.939463 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:00:00 crc 
kubenswrapper[4780]: I0219 10:00:00.162096 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"]
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.170860 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.175210 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.178239 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.211743 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"]
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.281945 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a90e65-1710-4860-a8be-c6cbc9423096-secret-volume\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.282312 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhbl\" (UniqueName: \"kubernetes.io/projected/a4a90e65-1710-4860-a8be-c6cbc9423096-kube-api-access-rnhbl\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.282391 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a90e65-1710-4860-a8be-c6cbc9423096-config-volume\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.387184 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhbl\" (UniqueName: \"kubernetes.io/projected/a4a90e65-1710-4860-a8be-c6cbc9423096-kube-api-access-rnhbl\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.387568 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a90e65-1710-4860-a8be-c6cbc9423096-config-volume\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.387729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a90e65-1710-4860-a8be-c6cbc9423096-secret-volume\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.389277 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a90e65-1710-4860-a8be-c6cbc9423096-config-volume\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.397970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a90e65-1710-4860-a8be-c6cbc9423096-secret-volume\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.409849 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhbl\" (UniqueName: \"kubernetes.io/projected/a4a90e65-1710-4860-a8be-c6cbc9423096-kube-api-access-rnhbl\") pod \"collect-profiles-29524920-grkl2\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.513847 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:00 crc kubenswrapper[4780]: I0219 10:00:00.532096 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7cbb6c7cf-xmjgd"
Feb 19 10:00:01 crc kubenswrapper[4780]: I0219 10:00:01.103930 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"]
Feb 19 10:00:01 crc kubenswrapper[4780]: W0219 10:00:01.112614 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a90e65_1710_4860_a8be_c6cbc9423096.slice/crio-4777f256dffabeec113f024c9e363c4c853acf00aa30de2b35eebe22d4319907 WatchSource:0}: Error finding container 4777f256dffabeec113f024c9e363c4c853acf00aa30de2b35eebe22d4319907: Status 404 returned error can't find the container with id 4777f256dffabeec113f024c9e363c4c853acf00aa30de2b35eebe22d4319907
Feb 19 10:00:01 crc kubenswrapper[4780]: I0219 10:00:01.327541 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2" event={"ID":"a4a90e65-1710-4860-a8be-c6cbc9423096","Type":"ContainerStarted","Data":"4777f256dffabeec113f024c9e363c4c853acf00aa30de2b35eebe22d4319907"}
Feb 19 10:00:02 crc kubenswrapper[4780]: I0219 10:00:02.337658 4780 generic.go:334] "Generic (PLEG): container finished" podID="a4a90e65-1710-4860-a8be-c6cbc9423096" containerID="96f06b75d9840b18dd2109d1e75629ec9586a7e2ebf246f249aab932a50fc5a8" exitCode=0
Feb 19 10:00:02 crc kubenswrapper[4780]: I0219 10:00:02.337742 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2" event={"ID":"a4a90e65-1710-4860-a8be-c6cbc9423096","Type":"ContainerDied","Data":"96f06b75d9840b18dd2109d1e75629ec9586a7e2ebf246f249aab932a50fc5a8"}
Feb 19 10:00:03 crc kubenswrapper[4780]: I0219 10:00:03.841598 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:03 crc kubenswrapper[4780]: I0219 10:00:03.982496 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a90e65-1710-4860-a8be-c6cbc9423096-secret-volume\") pod \"a4a90e65-1710-4860-a8be-c6cbc9423096\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") "
Feb 19 10:00:03 crc kubenswrapper[4780]: I0219 10:00:03.983086 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a90e65-1710-4860-a8be-c6cbc9423096-config-volume\") pod \"a4a90e65-1710-4860-a8be-c6cbc9423096\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") "
Feb 19 10:00:03 crc kubenswrapper[4780]: I0219 10:00:03.983209 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnhbl\" (UniqueName: \"kubernetes.io/projected/a4a90e65-1710-4860-a8be-c6cbc9423096-kube-api-access-rnhbl\") pod \"a4a90e65-1710-4860-a8be-c6cbc9423096\" (UID: \"a4a90e65-1710-4860-a8be-c6cbc9423096\") "
Feb 19 10:00:03 crc kubenswrapper[4780]: I0219 10:00:03.983864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a90e65-1710-4860-a8be-c6cbc9423096-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4a90e65-1710-4860-a8be-c6cbc9423096" (UID: "a4a90e65-1710-4860-a8be-c6cbc9423096"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:00:03 crc kubenswrapper[4780]: I0219 10:00:03.995257 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a90e65-1710-4860-a8be-c6cbc9423096-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4a90e65-1710-4860-a8be-c6cbc9423096" (UID: "a4a90e65-1710-4860-a8be-c6cbc9423096"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.000734 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a90e65-1710-4860-a8be-c6cbc9423096-kube-api-access-rnhbl" (OuterVolumeSpecName: "kube-api-access-rnhbl") pod "a4a90e65-1710-4860-a8be-c6cbc9423096" (UID: "a4a90e65-1710-4860-a8be-c6cbc9423096"). InnerVolumeSpecName "kube-api-access-rnhbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.103735 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a90e65-1710-4860-a8be-c6cbc9423096-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.103804 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a90e65-1710-4860-a8be-c6cbc9423096-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.103821 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnhbl\" (UniqueName: \"kubernetes.io/projected/a4a90e65-1710-4860-a8be-c6cbc9423096-kube-api-access-rnhbl\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.359573 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2" event={"ID":"a4a90e65-1710-4860-a8be-c6cbc9423096","Type":"ContainerDied","Data":"4777f256dffabeec113f024c9e363c4c853acf00aa30de2b35eebe22d4319907"}
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.359623 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4777f256dffabeec113f024c9e363c4c853acf00aa30de2b35eebe22d4319907"
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.359659 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.947009 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn"]
Feb 19 10:00:04 crc kubenswrapper[4780]: I0219 10:00:04.960518 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524875-pg6cn"]
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.037795 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8705-account-create-update-c4zbl"]
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.048445 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-w6tg6"]
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.056311 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8705-account-create-update-c4zbl"]
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.063821 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-w6tg6"]
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.948940 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d67ee36-1d79-4e07-84e1-ab2393377346" path="/var/lib/kubelet/pods/3d67ee36-1d79-4e07-84e1-ab2393377346/volumes"
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.950727 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f4800ec-5ca3-46ef-bce7-ed31e16d6716" path="/var/lib/kubelet/pods/5f4800ec-5ca3-46ef-bce7-ed31e16d6716/volumes"
Feb 19 10:00:05 crc kubenswrapper[4780]: I0219 10:00:05.951400 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f8f67a-63a8-4db2-819b-8296f36210f6" path="/var/lib/kubelet/pods/c9f8f67a-63a8-4db2-819b-8296f36210f6/volumes"
Feb 19 10:00:07 crc kubenswrapper[4780]: I0219 10:00:07.465715 4780 scope.go:117] "RemoveContainer" containerID="913d93ff2bf837ee2b10568ce28d685dd3937c80f338043d48e170e8947499fc"
Feb 19 10:00:07 crc kubenswrapper[4780]: I0219 10:00:07.496714 4780 scope.go:117] "RemoveContainer" containerID="3ccdba25532a400b8cbfddda2c6dfac298db8ed1abd6301a8b253a222b9c7f16"
Feb 19 10:00:07 crc kubenswrapper[4780]: I0219 10:00:07.553183 4780 scope.go:117] "RemoveContainer" containerID="b969040121fbe604ea496501584b5b2b4b82385b1b063ad607832596c5b81d3a"
Feb 19 10:00:07 crc kubenswrapper[4780]: I0219 10:00:07.594496 4780 scope.go:117] "RemoveContainer" containerID="f5c42dae23ffbe4bf90a1fa011fa0138106ae185fb95e6c7b84b2938e26c8e9c"
Feb 19 10:00:07 crc kubenswrapper[4780]: I0219 10:00:07.630981 4780 scope.go:117] "RemoveContainer" containerID="9c09a31da98967b19cfe5486935febcfe62d898b9675671c01d5bca9860a3521"
Feb 19 10:00:07 crc kubenswrapper[4780]: I0219 10:00:07.686838 4780 scope.go:117] "RemoveContainer" containerID="99fa089a114406875a6e3ab292d03a1b445c66603e4d8c2bd1a47456c624ffae"
Feb 19 10:00:08 crc kubenswrapper[4780]: I0219 10:00:08.342028 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cb769fd5c-8qgdv" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused"
Feb 19 10:00:12 crc kubenswrapper[4780]: I0219 10:00:12.037102 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-z6n6g"]
Feb 19 10:00:12 crc kubenswrapper[4780]: I0219 10:00:12.052104 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-z6n6g"]
Feb 19 10:00:13 crc kubenswrapper[4780]: I0219 10:00:13.950181 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd66800-94d4-455d-b1da-1f262385b717" path="/var/lib/kubelet/pods/7cd66800-94d4-455d-b1da-1f262385b717/volumes"
Feb 19 10:00:14 crc kubenswrapper[4780]: I0219 10:00:14.939395 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df"
Feb 19 10:00:14 crc kubenswrapper[4780]: E0219 10:00:14.939717 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 10:00:18 crc kubenswrapper[4780]: I0219 10:00:18.341779 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cb769fd5c-8qgdv" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused"
Feb 19 10:00:18 crc kubenswrapper[4780]: I0219 10:00:18.342545 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cb769fd5c-8qgdv"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.214882 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cb769fd5c-8qgdv"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.292181 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-scripts\") pod \"638cca61-9d4e-4812-9765-b8b24b72b7d3\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") "
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.292295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data\") pod \"638cca61-9d4e-4812-9765-b8b24b72b7d3\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") "
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.292574 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/638cca61-9d4e-4812-9765-b8b24b72b7d3-logs\") pod \"638cca61-9d4e-4812-9765-b8b24b72b7d3\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") "
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.292683 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgkvj\" (UniqueName: \"kubernetes.io/projected/638cca61-9d4e-4812-9765-b8b24b72b7d3-kube-api-access-pgkvj\") pod \"638cca61-9d4e-4812-9765-b8b24b72b7d3\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") "
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.292726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/638cca61-9d4e-4812-9765-b8b24b72b7d3-horizon-secret-key\") pod \"638cca61-9d4e-4812-9765-b8b24b72b7d3\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") "
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.294313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638cca61-9d4e-4812-9765-b8b24b72b7d3-logs" (OuterVolumeSpecName: "logs") pod "638cca61-9d4e-4812-9765-b8b24b72b7d3" (UID: "638cca61-9d4e-4812-9765-b8b24b72b7d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.301978 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638cca61-9d4e-4812-9765-b8b24b72b7d3-kube-api-access-pgkvj" (OuterVolumeSpecName: "kube-api-access-pgkvj") pod "638cca61-9d4e-4812-9765-b8b24b72b7d3" (UID: "638cca61-9d4e-4812-9765-b8b24b72b7d3"). InnerVolumeSpecName "kube-api-access-pgkvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.303191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638cca61-9d4e-4812-9765-b8b24b72b7d3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "638cca61-9d4e-4812-9765-b8b24b72b7d3" (UID: "638cca61-9d4e-4812-9765-b8b24b72b7d3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:00:20 crc kubenswrapper[4780]: E0219 10:00:20.325302 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data podName:638cca61-9d4e-4812-9765-b8b24b72b7d3 nodeName:}" failed. No retries permitted until 2026-02-19 10:00:20.825267029 +0000 UTC m=+5963.568924478 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data") pod "638cca61-9d4e-4812-9765-b8b24b72b7d3" (UID: "638cca61-9d4e-4812-9765-b8b24b72b7d3") : error deleting /var/lib/kubelet/pods/638cca61-9d4e-4812-9765-b8b24b72b7d3/volume-subpaths: remove /var/lib/kubelet/pods/638cca61-9d4e-4812-9765-b8b24b72b7d3/volume-subpaths: no such file or directory
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.325865 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-scripts" (OuterVolumeSpecName: "scripts") pod "638cca61-9d4e-4812-9765-b8b24b72b7d3" (UID: "638cca61-9d4e-4812-9765-b8b24b72b7d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.397980 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgkvj\" (UniqueName: \"kubernetes.io/projected/638cca61-9d4e-4812-9765-b8b24b72b7d3-kube-api-access-pgkvj\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.398021 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/638cca61-9d4e-4812-9765-b8b24b72b7d3-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.398037 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.398049 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/638cca61-9d4e-4812-9765-b8b24b72b7d3-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.575753 4780 generic.go:334] "Generic (PLEG): container finished" podID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerID="5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af" exitCode=137
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.575804 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cb769fd5c-8qgdv" event={"ID":"638cca61-9d4e-4812-9765-b8b24b72b7d3","Type":"ContainerDied","Data":"5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af"}
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.575868 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cb769fd5c-8qgdv" event={"ID":"638cca61-9d4e-4812-9765-b8b24b72b7d3","Type":"ContainerDied","Data":"ec1e918a2621215285820e4c128582d26ed4a50d6d563dd706c5edf4d5be94f0"}
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.575893 4780 scope.go:117] "RemoveContainer" containerID="cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.575899 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cb769fd5c-8qgdv"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.725649 4780 scope.go:117] "RemoveContainer" containerID="5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.750741 4780 scope.go:117] "RemoveContainer" containerID="cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5"
Feb 19 10:00:20 crc kubenswrapper[4780]: E0219 10:00:20.751419 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5\": container with ID starting with cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5 not found: ID does not exist" containerID="cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.751479 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5"} err="failed to get container status \"cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5\": rpc error: code = NotFound desc = could not find container \"cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5\": container with ID starting with cf30258268fba9c8f490dccd67cef17c7d53c262d4f65a3aa54cf75fd2a234e5 not found: ID does not exist"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.751513 4780 scope.go:117] "RemoveContainer" containerID="5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af"
Feb 19 10:00:20 crc kubenswrapper[4780]: E0219 10:00:20.751884 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af\": container with ID starting with 5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af not found: ID does not exist" containerID="5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.751914 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af"} err="failed to get container status \"5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af\": rpc error: code = NotFound desc = could not find container \"5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af\": container with ID starting with 5e48af438c0f836f1e28092d7cfff210d167ec31adb7cb2482b6f33af0c969af not found: ID does not exist"
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.910286 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data\") pod \"638cca61-9d4e-4812-9765-b8b24b72b7d3\" (UID: \"638cca61-9d4e-4812-9765-b8b24b72b7d3\") "
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.910931 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data" (OuterVolumeSpecName: "config-data") pod "638cca61-9d4e-4812-9765-b8b24b72b7d3" (UID: "638cca61-9d4e-4812-9765-b8b24b72b7d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:00:20 crc kubenswrapper[4780]: I0219 10:00:20.911093 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638cca61-9d4e-4812-9765-b8b24b72b7d3-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:00:21 crc kubenswrapper[4780]: I0219 10:00:21.213801 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cb769fd5c-8qgdv"]
Feb 19 10:00:21 crc kubenswrapper[4780]: I0219 10:00:21.225958 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cb769fd5c-8qgdv"]
Feb 19 10:00:21 crc kubenswrapper[4780]: I0219 10:00:21.961652 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" path="/var/lib/kubelet/pods/638cca61-9d4e-4812-9765-b8b24b72b7d3/volumes"
Feb 19 10:00:29 crc kubenswrapper[4780]: I0219 10:00:29.938357 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df"
Feb 19 10:00:29 crc kubenswrapper[4780]: E0219 10:00:29.939447 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.847363 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"]
Feb 19 10:00:42 crc kubenswrapper[4780]: E0219 10:00:42.848603 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon-log"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.848626 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon-log"
Feb 19 10:00:42 crc kubenswrapper[4780]: E0219 10:00:42.848657 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.848669 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon"
Feb 19 10:00:42 crc kubenswrapper[4780]: E0219 10:00:42.848714 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a90e65-1710-4860-a8be-c6cbc9423096" containerName="collect-profiles"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.848727 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a90e65-1710-4860-a8be-c6cbc9423096" containerName="collect-profiles"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.849263 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon-log"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.849310 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a90e65-1710-4860-a8be-c6cbc9423096" containerName="collect-profiles"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.849334 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="638cca61-9d4e-4812-9765-b8b24b72b7d3" containerName="horizon"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.855224 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.862648 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.867049 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"]
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.933496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.933589 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfvp\" (UniqueName: \"kubernetes.io/projected/ab6a0221-99a2-41ea-817f-391f39843ea8-kube-api-access-5bfvp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.933985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:42 crc kubenswrapper[4780]: I0219 10:00:42.938297 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.036638 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.036721 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfvp\" (UniqueName: \"kubernetes.io/projected/ab6a0221-99a2-41ea-817f-391f39843ea8-kube-api-access-5bfvp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.036864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.037463 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.037530 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.069049 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfvp\" (UniqueName: \"kubernetes.io/projected/ab6a0221-99a2-41ea-817f-391f39843ea8-kube-api-access-5bfvp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.187699 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.693193 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd"]
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.828154 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" event={"ID":"ab6a0221-99a2-41ea-817f-391f39843ea8","Type":"ContainerStarted","Data":"006b1fb002437939c314eb5ec9bf487745216bc132ca52561b639d1e68002422"}
Feb 19 10:00:43 crc kubenswrapper[4780]: I0219 10:00:43.831213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"eaba445b6162a4b57119126d86dd6757824995ccc18b63f567b6bed2e9bf9593"}
Feb 19 10:00:44 crc kubenswrapper[4780]: I0219 10:00:44.846082 4780 generic.go:334] "Generic (PLEG): container finished" podID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerID="d4c44f96b46e1e6e12cae0d853b593b2e3263586d55128192c0e0e4819f86778" exitCode=0
Feb 19 10:00:44 crc kubenswrapper[4780]: I0219 10:00:44.846201 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" event={"ID":"ab6a0221-99a2-41ea-817f-391f39843ea8","Type":"ContainerDied","Data":"d4c44f96b46e1e6e12cae0d853b593b2e3263586d55128192c0e0e4819f86778"}
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.125081 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pz6lq"]
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.129655 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.149052 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pz6lq"]
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.291953 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-catalog-content\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.292117 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55zw\" (UniqueName: \"kubernetes.io/projected/c6df687d-fedc-400e-bf1d-50fb5be4033c-kube-api-access-r55zw\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.292186 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-utilities\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.395073 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55zw\" (UniqueName: \"kubernetes.io/projected/c6df687d-fedc-400e-bf1d-50fb5be4033c-kube-api-access-r55zw\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.395221 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-utilities\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.395356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-catalog-content\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.396300 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-catalog-content\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.397607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-utilities\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.421527 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55zw\" (UniqueName: \"kubernetes.io/projected/c6df687d-fedc-400e-bf1d-50fb5be4033c-kube-api-access-r55zw\") pod \"redhat-operators-pz6lq\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") " pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:00:45 crc kubenswrapper[4780]: I0219 10:00:45.820788 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pz6lq" Feb 19 10:00:46 crc kubenswrapper[4780]: I0219 10:00:46.364058 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pz6lq"] Feb 19 10:00:46 crc kubenswrapper[4780]: W0219 10:00:46.374942 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6df687d_fedc_400e_bf1d_50fb5be4033c.slice/crio-c3127d219274a031d5d4c2b0502836897fe8c1afe2385515d3e39e358a2fb96e WatchSource:0}: Error finding container c3127d219274a031d5d4c2b0502836897fe8c1afe2385515d3e39e358a2fb96e: Status 404 returned error can't find the container with id c3127d219274a031d5d4c2b0502836897fe8c1afe2385515d3e39e358a2fb96e Feb 19 10:00:46 crc kubenswrapper[4780]: I0219 10:00:46.898211 4780 generic.go:334] "Generic (PLEG): container finished" podID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerID="a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308" exitCode=0 Feb 19 10:00:46 crc kubenswrapper[4780]: I0219 10:00:46.898293 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerDied","Data":"a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308"} Feb 19 10:00:46 crc kubenswrapper[4780]: I0219 10:00:46.898660 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerStarted","Data":"c3127d219274a031d5d4c2b0502836897fe8c1afe2385515d3e39e358a2fb96e"} Feb 19 10:00:47 crc kubenswrapper[4780]: I0219 10:00:47.915822 4780 generic.go:334] "Generic (PLEG): container finished" podID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerID="b9c213cf129d561dd79d02135154b70072e20bef07109abc54c8a5135fff0935" exitCode=0 Feb 19 10:00:47 crc kubenswrapper[4780]: I0219 10:00:47.915929 
4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" event={"ID":"ab6a0221-99a2-41ea-817f-391f39843ea8","Type":"ContainerDied","Data":"b9c213cf129d561dd79d02135154b70072e20bef07109abc54c8a5135fff0935"} Feb 19 10:00:48 crc kubenswrapper[4780]: I0219 10:00:48.928560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerStarted","Data":"ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8"} Feb 19 10:00:48 crc kubenswrapper[4780]: I0219 10:00:48.934168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" event={"ID":"ab6a0221-99a2-41ea-817f-391f39843ea8","Type":"ContainerStarted","Data":"d3b1c0d677841a9ee6b1c6d9b455d03c90ef411e3e55f658a3cd6a552d12ecbb"} Feb 19 10:00:49 crc kubenswrapper[4780]: I0219 10:00:49.006971 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" podStartSLOduration=4.77865404 podStartE2EDuration="7.006948097s" podCreationTimestamp="2026-02-19 10:00:42 +0000 UTC" firstStartedPulling="2026-02-19 10:00:44.849478639 +0000 UTC m=+5987.593136088" lastFinishedPulling="2026-02-19 10:00:47.077772696 +0000 UTC m=+5989.821430145" observedRunningTime="2026-02-19 10:00:49.000941059 +0000 UTC m=+5991.744598518" watchObservedRunningTime="2026-02-19 10:00:49.006948097 +0000 UTC m=+5991.750605556" Feb 19 10:00:50 crc kubenswrapper[4780]: I0219 10:00:50.965601 4780 generic.go:334] "Generic (PLEG): container finished" podID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerID="d3b1c0d677841a9ee6b1c6d9b455d03c90ef411e3e55f658a3cd6a552d12ecbb" exitCode=0 Feb 19 10:00:50 crc kubenswrapper[4780]: I0219 10:00:50.965741 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" event={"ID":"ab6a0221-99a2-41ea-817f-391f39843ea8","Type":"ContainerDied","Data":"d3b1c0d677841a9ee6b1c6d9b455d03c90ef411e3e55f658a3cd6a552d12ecbb"} Feb 19 10:00:50 crc kubenswrapper[4780]: I0219 10:00:50.972244 4780 generic.go:334] "Generic (PLEG): container finished" podID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerID="ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8" exitCode=0 Feb 19 10:00:50 crc kubenswrapper[4780]: I0219 10:00:50.972296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerDied","Data":"ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8"} Feb 19 10:00:51 crc kubenswrapper[4780]: I0219 10:00:51.992718 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerStarted","Data":"f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e"} Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.025277 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pz6lq" podStartSLOduration=2.536946964 podStartE2EDuration="7.025252457s" podCreationTimestamp="2026-02-19 10:00:45 +0000 UTC" firstStartedPulling="2026-02-19 10:00:47.026641068 +0000 UTC m=+5989.770298527" lastFinishedPulling="2026-02-19 10:00:51.514946531 +0000 UTC m=+5994.258604020" observedRunningTime="2026-02-19 10:00:52.019442494 +0000 UTC m=+5994.763099993" watchObservedRunningTime="2026-02-19 10:00:52.025252457 +0000 UTC m=+5994.768909906" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.415641 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.574907 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-bundle\") pod \"ab6a0221-99a2-41ea-817f-391f39843ea8\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.575080 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-util\") pod \"ab6a0221-99a2-41ea-817f-391f39843ea8\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.575141 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bfvp\" (UniqueName: \"kubernetes.io/projected/ab6a0221-99a2-41ea-817f-391f39843ea8-kube-api-access-5bfvp\") pod \"ab6a0221-99a2-41ea-817f-391f39843ea8\" (UID: \"ab6a0221-99a2-41ea-817f-391f39843ea8\") " Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.576815 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-bundle" (OuterVolumeSpecName: "bundle") pod "ab6a0221-99a2-41ea-817f-391f39843ea8" (UID: "ab6a0221-99a2-41ea-817f-391f39843ea8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.585657 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-util" (OuterVolumeSpecName: "util") pod "ab6a0221-99a2-41ea-817f-391f39843ea8" (UID: "ab6a0221-99a2-41ea-817f-391f39843ea8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.597463 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6a0221-99a2-41ea-817f-391f39843ea8-kube-api-access-5bfvp" (OuterVolumeSpecName: "kube-api-access-5bfvp") pod "ab6a0221-99a2-41ea-817f-391f39843ea8" (UID: "ab6a0221-99a2-41ea-817f-391f39843ea8"). InnerVolumeSpecName "kube-api-access-5bfvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.678437 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.678799 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab6a0221-99a2-41ea-817f-391f39843ea8-util\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:52 crc kubenswrapper[4780]: I0219 10:00:52.678931 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bfvp\" (UniqueName: \"kubernetes.io/projected/ab6a0221-99a2-41ea-817f-391f39843ea8-kube-api-access-5bfvp\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:53 crc kubenswrapper[4780]: I0219 10:00:53.010533 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" event={"ID":"ab6a0221-99a2-41ea-817f-391f39843ea8","Type":"ContainerDied","Data":"006b1fb002437939c314eb5ec9bf487745216bc132ca52561b639d1e68002422"} Feb 19 10:00:53 crc kubenswrapper[4780]: I0219 10:00:53.010607 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006b1fb002437939c314eb5ec9bf487745216bc132ca52561b639d1e68002422" Feb 19 10:00:53 crc kubenswrapper[4780]: I0219 10:00:53.010737 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd" Feb 19 10:00:55 crc kubenswrapper[4780]: I0219 10:00:55.822244 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pz6lq" Feb 19 10:00:55 crc kubenswrapper[4780]: I0219 10:00:55.826175 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pz6lq" Feb 19 10:00:56 crc kubenswrapper[4780]: I0219 10:00:56.908635 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" probeResult="failure" output=< Feb 19 10:00:56 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:00:56 crc kubenswrapper[4780]: > Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.296301 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524921-r86qv"] Feb 19 10:01:00 crc kubenswrapper[4780]: E0219 10:01:00.297535 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerName="util" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.297553 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerName="util" Feb 19 10:01:00 crc kubenswrapper[4780]: E0219 10:01:00.297588 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerName="extract" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.297621 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerName="extract" Feb 19 10:01:00 crc kubenswrapper[4780]: E0219 10:01:00.297651 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" 
containerName="pull" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.297658 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerName="pull" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.297908 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6a0221-99a2-41ea-817f-391f39843ea8" containerName="extract" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.298814 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.319860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524921-r86qv"] Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.475054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvrxb\" (UniqueName: \"kubernetes.io/projected/77b6fd2d-36a9-4498-a369-266c0b665a28-kube-api-access-nvrxb\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.475419 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-config-data\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.475466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-fernet-keys\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc 
kubenswrapper[4780]: I0219 10:01:00.475487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-combined-ca-bundle\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.576710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvrxb\" (UniqueName: \"kubernetes.io/projected/77b6fd2d-36a9-4498-a369-266c0b665a28-kube-api-access-nvrxb\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.576811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-config-data\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.576854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-fernet-keys\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.576871 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-combined-ca-bundle\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 
10:01:00.586389 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-config-data\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.594972 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-combined-ca-bundle\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.598548 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-fernet-keys\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.598647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvrxb\" (UniqueName: \"kubernetes.io/projected/77b6fd2d-36a9-4498-a369-266c0b665a28-kube-api-access-nvrxb\") pod \"keystone-cron-29524921-r86qv\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:00 crc kubenswrapper[4780]: I0219 10:01:00.657069 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:01 crc kubenswrapper[4780]: I0219 10:01:01.247406 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524921-r86qv"] Feb 19 10:01:02 crc kubenswrapper[4780]: I0219 10:01:02.111756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524921-r86qv" event={"ID":"77b6fd2d-36a9-4498-a369-266c0b665a28","Type":"ContainerStarted","Data":"17624bc3bce7fea8a7f1bfab9128c3782803829049d86125f0e0828ccd299631"} Feb 19 10:01:02 crc kubenswrapper[4780]: I0219 10:01:02.112624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524921-r86qv" event={"ID":"77b6fd2d-36a9-4498-a369-266c0b665a28","Type":"ContainerStarted","Data":"985956d746e13a54d117ee14bbb3eb36bd415acd4b69460a499fc75ede98002d"} Feb 19 10:01:02 crc kubenswrapper[4780]: I0219 10:01:02.148055 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524921-r86qv" podStartSLOduration=2.148021584 podStartE2EDuration="2.148021584s" podCreationTimestamp="2026-02-19 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:02.13727495 +0000 UTC m=+6004.880932419" watchObservedRunningTime="2026-02-19 10:01:02.148021584 +0000 UTC m=+6004.891679033" Feb 19 10:01:06 crc kubenswrapper[4780]: I0219 10:01:06.905953 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:06 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:06 crc kubenswrapper[4780]: > Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.177252 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.179429 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.181404 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.182052 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.197886 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.198637 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-s5l25" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.238555 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjb2\" (UniqueName: \"kubernetes.io/projected/43b00acb-5aa9-4e89-8eaf-43217205623b-kube-api-access-jbjb2\") pod \"obo-prometheus-operator-68bc856cb9-mjrgp\" (UID: \"43b00acb-5aa9-4e89-8eaf-43217205623b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.355310 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.358114 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.358565 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjb2\" (UniqueName: \"kubernetes.io/projected/43b00acb-5aa9-4e89-8eaf-43217205623b-kube-api-access-jbjb2\") pod \"obo-prometheus-operator-68bc856cb9-mjrgp\" (UID: \"43b00acb-5aa9-4e89-8eaf-43217205623b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.362755 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-59b6z" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.363700 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.409056 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjb2\" (UniqueName: \"kubernetes.io/projected/43b00acb-5aa9-4e89-8eaf-43217205623b-kube-api-access-jbjb2\") pod \"obo-prometheus-operator-68bc856cb9-mjrgp\" (UID: \"43b00acb-5aa9-4e89-8eaf-43217205623b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.418647 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.423755 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.460599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/def2ec98-a720-44e2-b7ae-e4b917a073e0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm\" (UID: \"def2ec98-a720-44e2-b7ae-e4b917a073e0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.460709 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/def2ec98-a720-44e2-b7ae-e4b917a073e0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm\" (UID: \"def2ec98-a720-44e2-b7ae-e4b917a073e0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.472428 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.511278 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.512836 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.562278 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/def2ec98-a720-44e2-b7ae-e4b917a073e0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm\" (UID: \"def2ec98-a720-44e2-b7ae-e4b917a073e0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.562347 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d5cb621-e571-4a9c-b564-f9ce5b07295f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g\" (UID: \"9d5cb621-e571-4a9c-b564-f9ce5b07295f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.562465 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d5cb621-e571-4a9c-b564-f9ce5b07295f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g\" (UID: \"9d5cb621-e571-4a9c-b564-f9ce5b07295f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.562514 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/def2ec98-a720-44e2-b7ae-e4b917a073e0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm\" (UID: \"def2ec98-a720-44e2-b7ae-e4b917a073e0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 
10:01:07.570755 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/def2ec98-a720-44e2-b7ae-e4b917a073e0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm\" (UID: \"def2ec98-a720-44e2-b7ae-e4b917a073e0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.574602 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/def2ec98-a720-44e2-b7ae-e4b917a073e0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm\" (UID: \"def2ec98-a720-44e2-b7ae-e4b917a073e0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.583968 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4fxhv"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.587585 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.590538 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4fxhv"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.600710 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-lnj6k" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.621950 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.664589 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d5cb621-e571-4a9c-b564-f9ce5b07295f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g\" (UID: \"9d5cb621-e571-4a9c-b564-f9ce5b07295f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.664695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/df4e54f0-d1df-44af-ba63-0e8a6791a6d0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4fxhv\" (UID: \"df4e54f0-d1df-44af-ba63-0e8a6791a6d0\") " pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.664756 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d5cb621-e571-4a9c-b564-f9ce5b07295f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g\" (UID: \"9d5cb621-e571-4a9c-b564-f9ce5b07295f\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.664820 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5fr\" (UniqueName: \"kubernetes.io/projected/df4e54f0-d1df-44af-ba63-0e8a6791a6d0-kube-api-access-lc5fr\") pod \"observability-operator-59bdc8b94-4fxhv\" (UID: \"df4e54f0-d1df-44af-ba63-0e8a6791a6d0\") " pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.684957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d5cb621-e571-4a9c-b564-f9ce5b07295f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g\" (UID: \"9d5cb621-e571-4a9c-b564-f9ce5b07295f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.685326 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d5cb621-e571-4a9c-b564-f9ce5b07295f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g\" (UID: \"9d5cb621-e571-4a9c-b564-f9ce5b07295f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.701634 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.768324 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/df4e54f0-d1df-44af-ba63-0e8a6791a6d0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4fxhv\" (UID: \"df4e54f0-d1df-44af-ba63-0e8a6791a6d0\") " pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.768466 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5fr\" (UniqueName: \"kubernetes.io/projected/df4e54f0-d1df-44af-ba63-0e8a6791a6d0-kube-api-access-lc5fr\") pod \"observability-operator-59bdc8b94-4fxhv\" (UID: \"df4e54f0-d1df-44af-ba63-0e8a6791a6d0\") " pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.784573 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/df4e54f0-d1df-44af-ba63-0e8a6791a6d0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4fxhv\" (UID: \"df4e54f0-d1df-44af-ba63-0e8a6791a6d0\") " pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.793224 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nx8q8"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.795787 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.799616 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-z8fxf" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.806812 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.820273 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc5fr\" (UniqueName: \"kubernetes.io/projected/df4e54f0-d1df-44af-ba63-0e8a6791a6d0-kube-api-access-lc5fr\") pod \"observability-operator-59bdc8b94-4fxhv\" (UID: \"df4e54f0-d1df-44af-ba63-0e8a6791a6d0\") " pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.838499 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nx8q8"] Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.852789 4780 scope.go:117] "RemoveContainer" containerID="39de7f44f4703ce0130ec0e64869ef912ab7b085b71776c2e157ce821fd3e4f4" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.872515 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nx8q8\" (UID: \"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a\") " pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.872871 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfptw\" (UniqueName: 
\"kubernetes.io/projected/f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a-kube-api-access-tfptw\") pod \"perses-operator-5bf474d74f-nx8q8\" (UID: \"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a\") " pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.975228 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nx8q8\" (UID: \"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a\") " pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.975305 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfptw\" (UniqueName: \"kubernetes.io/projected/f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a-kube-api-access-tfptw\") pod \"perses-operator-5bf474d74f-nx8q8\" (UID: \"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a\") " pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:07 crc kubenswrapper[4780]: I0219 10:01:07.978050 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nx8q8\" (UID: \"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a\") " pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.010281 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfptw\" (UniqueName: \"kubernetes.io/projected/f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a-kube-api-access-tfptw\") pod \"perses-operator-5bf474d74f-nx8q8\" (UID: \"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a\") " pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.109273 4780 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.160692 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.218831 4780 generic.go:334] "Generic (PLEG): container finished" podID="77b6fd2d-36a9-4498-a369-266c0b665a28" containerID="17624bc3bce7fea8a7f1bfab9128c3782803829049d86125f0e0828ccd299631" exitCode=0 Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.218895 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524921-r86qv" event={"ID":"77b6fd2d-36a9-4498-a369-266c0b665a28","Type":"ContainerDied","Data":"17624bc3bce7fea8a7f1bfab9128c3782803829049d86125f0e0828ccd299631"} Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.230787 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp"] Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.544613 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm"] Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.750744 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g"] Feb 19 10:01:08 crc kubenswrapper[4780]: W0219 10:01:08.770495 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d5cb621_e571_4a9c_b564_f9ce5b07295f.slice/crio-f0e581c41eec61521e5e2a3b946470bf9311a0a4b7a915d3f29195449e1816b6 WatchSource:0}: Error finding container f0e581c41eec61521e5e2a3b946470bf9311a0a4b7a915d3f29195449e1816b6: Status 404 returned error can't find the container with id 
f0e581c41eec61521e5e2a3b946470bf9311a0a4b7a915d3f29195449e1816b6 Feb 19 10:01:08 crc kubenswrapper[4780]: I0219 10:01:08.985844 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4fxhv"] Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.236821 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" event={"ID":"def2ec98-a720-44e2-b7ae-e4b917a073e0","Type":"ContainerStarted","Data":"2e7d3b4f8baf1033463d204dbc7701202371a13897ea38fa966f9a72f0e7678d"} Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.241070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" event={"ID":"43b00acb-5aa9-4e89-8eaf-43217205623b","Type":"ContainerStarted","Data":"8717f394be2d08896b1088a166b87be547a2ae0bd8d86824c0e116a309e658e4"} Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.248752 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" event={"ID":"df4e54f0-d1df-44af-ba63-0e8a6791a6d0","Type":"ContainerStarted","Data":"0da22da40e5f1e2c8d7abe36eecbc7467adbfb71d5ffad842894e18fcc614719"} Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.251292 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" event={"ID":"9d5cb621-e571-4a9c-b564-f9ce5b07295f","Type":"ContainerStarted","Data":"f0e581c41eec61521e5e2a3b946470bf9311a0a4b7a915d3f29195449e1816b6"} Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.323261 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nx8q8"] Feb 19 10:01:09 crc kubenswrapper[4780]: W0219 10:01:09.332268 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65a5e3a_d2c6_42fd_82fa_f9e8d3f3ae1a.slice/crio-4aee5da0d5bd09446018f19300f9e833811335570e5ba528dc3d341cd3d13999 WatchSource:0}: Error finding container 4aee5da0d5bd09446018f19300f9e833811335570e5ba528dc3d341cd3d13999: Status 404 returned error can't find the container with id 4aee5da0d5bd09446018f19300f9e833811335570e5ba528dc3d341cd3d13999 Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.338481 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:01:09 crc kubenswrapper[4780]: I0219 10:01:09.942017 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.079098 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-config-data\") pod \"77b6fd2d-36a9-4498-a369-266c0b665a28\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.080252 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-fernet-keys\") pod \"77b6fd2d-36a9-4498-a369-266c0b665a28\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.080374 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvrxb\" (UniqueName: \"kubernetes.io/projected/77b6fd2d-36a9-4498-a369-266c0b665a28-kube-api-access-nvrxb\") pod \"77b6fd2d-36a9-4498-a369-266c0b665a28\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.080471 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-combined-ca-bundle\") pod \"77b6fd2d-36a9-4498-a369-266c0b665a28\" (UID: \"77b6fd2d-36a9-4498-a369-266c0b665a28\") " Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.114561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b6fd2d-36a9-4498-a369-266c0b665a28-kube-api-access-nvrxb" (OuterVolumeSpecName: "kube-api-access-nvrxb") pod "77b6fd2d-36a9-4498-a369-266c0b665a28" (UID: "77b6fd2d-36a9-4498-a369-266c0b665a28"). InnerVolumeSpecName "kube-api-access-nvrxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.118582 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "77b6fd2d-36a9-4498-a369-266c0b665a28" (UID: "77b6fd2d-36a9-4498-a369-266c0b665a28"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.137909 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b6fd2d-36a9-4498-a369-266c0b665a28" (UID: "77b6fd2d-36a9-4498-a369-266c0b665a28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.183620 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.183675 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvrxb\" (UniqueName: \"kubernetes.io/projected/77b6fd2d-36a9-4498-a369-266c0b665a28-kube-api-access-nvrxb\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.183686 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.224248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-config-data" (OuterVolumeSpecName: "config-data") pod "77b6fd2d-36a9-4498-a369-266c0b665a28" (UID: "77b6fd2d-36a9-4498-a369-266c0b665a28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.286095 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd2d-36a9-4498-a369-266c0b665a28-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.293736 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524921-r86qv" event={"ID":"77b6fd2d-36a9-4498-a369-266c0b665a28","Type":"ContainerDied","Data":"985956d746e13a54d117ee14bbb3eb36bd415acd4b69460a499fc75ede98002d"} Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.293784 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985956d746e13a54d117ee14bbb3eb36bd415acd4b69460a499fc75ede98002d" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.293859 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524921-r86qv" Feb 19 10:01:10 crc kubenswrapper[4780]: I0219 10:01:10.296869 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" event={"ID":"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a","Type":"ContainerStarted","Data":"4aee5da0d5bd09446018f19300f9e833811335570e5ba528dc3d341cd3d13999"} Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.055188 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2g2jr"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.079422 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xwq24"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.103309 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-59ad-account-create-update-mmkdh"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.129433 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-db-create-2g2jr"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.140694 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xwq24"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.164195 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-59ad-account-create-update-mmkdh"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.171311 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b504-account-create-update-45jzf"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.180992 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b504-account-create-update-45jzf"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.192415 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2tk28"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.205986 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-eddf-account-create-update-qthjk"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.213844 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-eddf-account-create-update-qthjk"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.236990 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2tk28"] Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.953395 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5a3587-4422-416d-abda-b793c782e693" path="/var/lib/kubelet/pods/0c5a3587-4422-416d-abda-b793c782e693/volumes" Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.985643 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cdced8-eb90-4931-b0c5-7a6d803aeef8" path="/var/lib/kubelet/pods/42cdced8-eb90-4931-b0c5-7a6d803aeef8/volumes" Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.986996 4780 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f964524-b75b-44cf-b10c-82ee59a8b98a" path="/var/lib/kubelet/pods/8f964524-b75b-44cf-b10c-82ee59a8b98a/volumes" Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.990441 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49e45b6-af97-4283-9398-af6a2b81f11f" path="/var/lib/kubelet/pods/b49e45b6-af97-4283-9398-af6a2b81f11f/volumes" Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.995254 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fd5efa-d635-492a-9f0d-03bc75547d9f" path="/var/lib/kubelet/pods/b4fd5efa-d635-492a-9f0d-03bc75547d9f/volumes" Feb 19 10:01:13 crc kubenswrapper[4780]: I0219 10:01:13.998001 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4" path="/var/lib/kubelet/pods/bf30ded4-17c8-4c48-b5b5-aa689cd4b8d4/volumes" Feb 19 10:01:16 crc kubenswrapper[4780]: I0219 10:01:16.877946 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:16 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:16 crc kubenswrapper[4780]: > Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.488810 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" event={"ID":"f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a","Type":"ContainerStarted","Data":"0b35cc3643d90ad52d1719742b872251de1bcd4f8f35e6e48a10021aa059dbbf"} Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.489626 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.491843 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" event={"ID":"df4e54f0-d1df-44af-ba63-0e8a6791a6d0","Type":"ContainerStarted","Data":"e44e616624b623f45268d221d5f30fbb2278a4cc8fab26229b8d0fec004b9ae9"} Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.492075 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.493898 4780 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-4fxhv container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.123:8081/healthz\": dial tcp 10.217.1.123:8081: connect: connection refused" start-of-body= Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.493967 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" podUID="df4e54f0-d1df-44af-ba63-0e8a6791a6d0" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.123:8081/healthz\": dial tcp 10.217.1.123:8081: connect: connection refused" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.496002 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" event={"ID":"9d5cb621-e571-4a9c-b564-f9ce5b07295f","Type":"ContainerStarted","Data":"b4764c93c255effc3e0e0a150c64f5639dbd4ea7878a3ead911a2e4bbc10161f"} Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.498214 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" event={"ID":"def2ec98-a720-44e2-b7ae-e4b917a073e0","Type":"ContainerStarted","Data":"cb9d0977cdcba482c969bf0dc607ef982391efaa96cec9ecff204d6f612d7e8c"} Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.504590 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" event={"ID":"43b00acb-5aa9-4e89-8eaf-43217205623b","Type":"ContainerStarted","Data":"b6e0ae3cf10ec049f5878498e3dbd987c7f58ee06ac7644ac4504714d22f45c1"} Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.548735 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" podStartSLOduration=3.179857941 podStartE2EDuration="17.548708993s" podCreationTimestamp="2026-02-19 10:01:07 +0000 UTC" firstStartedPulling="2026-02-19 10:01:09.338215738 +0000 UTC m=+6012.081873187" lastFinishedPulling="2026-02-19 10:01:23.70706679 +0000 UTC m=+6026.450724239" observedRunningTime="2026-02-19 10:01:24.528045335 +0000 UTC m=+6027.271702794" watchObservedRunningTime="2026-02-19 10:01:24.548708993 +0000 UTC m=+6027.292366442" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.558847 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm" podStartSLOduration=2.456067644 podStartE2EDuration="17.558822161s" podCreationTimestamp="2026-02-19 10:01:07 +0000 UTC" firstStartedPulling="2026-02-19 10:01:08.633320186 +0000 UTC m=+6011.376977635" lastFinishedPulling="2026-02-19 10:01:23.736074703 +0000 UTC m=+6026.479732152" observedRunningTime="2026-02-19 10:01:24.552184268 +0000 UTC m=+6027.295841737" watchObservedRunningTime="2026-02-19 10:01:24.558822161 +0000 UTC m=+6027.302479610" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.580944 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" podStartSLOduration=2.757309762 podStartE2EDuration="17.580916694s" podCreationTimestamp="2026-02-19 10:01:07 +0000 UTC" firstStartedPulling="2026-02-19 10:01:08.989982556 +0000 UTC m=+6011.733640005" lastFinishedPulling="2026-02-19 10:01:23.813589488 +0000 UTC 
m=+6026.557246937" observedRunningTime="2026-02-19 10:01:24.575108112 +0000 UTC m=+6027.318765591" watchObservedRunningTime="2026-02-19 10:01:24.580916694 +0000 UTC m=+6027.324574153" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.612538 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mjrgp" podStartSLOduration=2.236963137 podStartE2EDuration="17.612512571s" podCreationTimestamp="2026-02-19 10:01:07 +0000 UTC" firstStartedPulling="2026-02-19 10:01:08.331715091 +0000 UTC m=+6011.075372530" lastFinishedPulling="2026-02-19 10:01:23.707264515 +0000 UTC m=+6026.450921964" observedRunningTime="2026-02-19 10:01:24.607024996 +0000 UTC m=+6027.350682445" watchObservedRunningTime="2026-02-19 10:01:24.612512571 +0000 UTC m=+6027.356170020" Feb 19 10:01:24 crc kubenswrapper[4780]: I0219 10:01:24.639146 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g" podStartSLOduration=2.715309178 podStartE2EDuration="17.639113405s" podCreationTimestamp="2026-02-19 10:01:07 +0000 UTC" firstStartedPulling="2026-02-19 10:01:08.783357165 +0000 UTC m=+6011.527014614" lastFinishedPulling="2026-02-19 10:01:23.707161382 +0000 UTC m=+6026.450818841" observedRunningTime="2026-02-19 10:01:24.631350964 +0000 UTC m=+6027.375008423" watchObservedRunningTime="2026-02-19 10:01:24.639113405 +0000 UTC m=+6027.382770854" Feb 19 10:01:25 crc kubenswrapper[4780]: I0219 10:01:25.533441 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4fxhv" Feb 19 10:01:26 crc kubenswrapper[4780]: I0219 10:01:26.887733 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:26 crc 
kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:26 crc kubenswrapper[4780]: > Feb 19 10:01:27 crc kubenswrapper[4780]: I0219 10:01:27.078057 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hq2qh"] Feb 19 10:01:27 crc kubenswrapper[4780]: I0219 10:01:27.096808 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hq2qh"] Feb 19 10:01:27 crc kubenswrapper[4780]: I0219 10:01:27.969244 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f356f3-3144-4be1-9276-d9d1554d0ea1" path="/var/lib/kubelet/pods/a1f356f3-3144-4be1-9276-d9d1554d0ea1/volumes" Feb 19 10:01:36 crc kubenswrapper[4780]: I0219 10:01:36.879057 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:36 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:36 crc kubenswrapper[4780]: > Feb 19 10:01:38 crc kubenswrapper[4780]: I0219 10:01:38.164580 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-nx8q8" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.718911 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.719980 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" containerName="openstackclient" containerID="cri-o://fa561b7eceff1048752459654dc4ea5fa063dff45ae0ae4e42f3de647a85e78a" gracePeriod=2 Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.732339 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 10:01:41 crc 
kubenswrapper[4780]: I0219 10:01:41.798216 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 10:01:41 crc kubenswrapper[4780]: E0219 10:01:41.798829 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b6fd2d-36a9-4498-a369-266c0b665a28" containerName="keystone-cron" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.798857 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b6fd2d-36a9-4498-a369-266c0b665a28" containerName="keystone-cron" Feb 19 10:01:41 crc kubenswrapper[4780]: E0219 10:01:41.798905 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" containerName="openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.798913 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" containerName="openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.799155 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b6fd2d-36a9-4498-a369-266c0b665a28" containerName="keystone-cron" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.799180 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" containerName="openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.806312 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.811846 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9r9x\" (UniqueName: \"kubernetes.io/projected/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-kube-api-access-x9r9x\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.811936 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-openstack-config-secret\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.812049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-openstack-config\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.813760 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" podUID="09bafe4c-f2c6-4736-b731-c3ce9f68f18f" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.825834 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.917968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.918022 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-openstack-config\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.918186 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9r9x\" (UniqueName: \"kubernetes.io/projected/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-kube-api-access-x9r9x\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.921617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-openstack-config\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.964079 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-openstack-config-secret\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:41 crc kubenswrapper[4780]: I0219 10:01:41.967873 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9r9x\" (UniqueName: \"kubernetes.io/projected/09bafe4c-f2c6-4736-b731-c3ce9f68f18f-kube-api-access-x9r9x\") pod \"openstackclient\" (UID: \"09bafe4c-f2c6-4736-b731-c3ce9f68f18f\") " pod="openstack/openstackclient" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.038582 4780 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.050601 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.058153 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-b8knh" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.072538 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.124302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsnw\" (UniqueName: \"kubernetes.io/projected/9d263497-7e86-4475-a86e-c49fa0b57cf3-kube-api-access-jxsnw\") pod \"kube-state-metrics-0\" (UID: \"9d263497-7e86-4475-a86e-c49fa0b57cf3\") " pod="openstack/kube-state-metrics-0" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.127952 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.226119 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxsnw\" (UniqueName: \"kubernetes.io/projected/9d263497-7e86-4475-a86e-c49fa0b57cf3-kube-api-access-jxsnw\") pod \"kube-state-metrics-0\" (UID: \"9d263497-7e86-4475-a86e-c49fa0b57cf3\") " pod="openstack/kube-state-metrics-0" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.295532 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxsnw\" (UniqueName: \"kubernetes.io/projected/9d263497-7e86-4475-a86e-c49fa0b57cf3-kube-api-access-jxsnw\") pod \"kube-state-metrics-0\" (UID: \"9d263497-7e86-4475-a86e-c49fa0b57cf3\") " pod="openstack/kube-state-metrics-0" Feb 19 10:01:42 crc kubenswrapper[4780]: I0219 10:01:42.374019 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.127163 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.129828 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.133544 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.133891 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.134063 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.138435 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.141731 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-bvbp2" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.167993 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.200229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30ad128e-0986-4944-8bdb-ae191d72c28d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.200274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 
crc kubenswrapper[4780]: I0219 10:01:43.200329 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/30ad128e-0986-4944-8bdb-ae191d72c28d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.200378 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.200406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30ad128e-0986-4944-8bdb-ae191d72c28d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.200440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.200527 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hv9m\" (UniqueName: \"kubernetes.io/projected/30ad128e-0986-4944-8bdb-ae191d72c28d-kube-api-access-4hv9m\") pod \"alertmanager-metric-storage-0\" (UID: 
\"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308648 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hv9m\" (UniqueName: \"kubernetes.io/projected/30ad128e-0986-4944-8bdb-ae191d72c28d-kube-api-access-4hv9m\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308761 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30ad128e-0986-4944-8bdb-ae191d72c28d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308793 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308851 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/30ad128e-0986-4944-8bdb-ae191d72c28d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308913 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30ad128e-0986-4944-8bdb-ae191d72c28d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.308973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.318823 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.321971 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30ad128e-0986-4944-8bdb-ae191d72c28d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.322296 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/30ad128e-0986-4944-8bdb-ae191d72c28d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.325168 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.330625 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30ad128e-0986-4944-8bdb-ae191d72c28d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.331575 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30ad128e-0986-4944-8bdb-ae191d72c28d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.349567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hv9m\" (UniqueName: \"kubernetes.io/projected/30ad128e-0986-4944-8bdb-ae191d72c28d-kube-api-access-4hv9m\") pod \"alertmanager-metric-storage-0\" (UID: \"30ad128e-0986-4944-8bdb-ae191d72c28d\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.461082 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.512814 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.550066 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.694140 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.697271 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.701400 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.701518 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.701647 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.701729 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ctcft" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.701798 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.701910 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.702086 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.702715 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.716332 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831441 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-config\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831595 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831740 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbrzl\" (UniqueName: \"kubernetes.io/projected/a321740f-f577-4e8c-816d-95b714f098c7-kube-api-access-wbrzl\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831806 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a321740f-f577-4e8c-816d-95b714f098c7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831852 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb98989b-cdda-4f12-a34f-49d975270871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb98989b-cdda-4f12-a34f-49d975270871\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831887 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.831911 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a321740f-f577-4e8c-816d-95b714f098c7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.862875 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d263497-7e86-4475-a86e-c49fa0b57cf3","Type":"ContainerStarted","Data":"13260612245db9b8e652cbc35a8eecb448a4a2098ee1de9eabb39b52e8269217"} Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.866712 4780 generic.go:334] "Generic (PLEG): container finished" podID="b807c707-a369-4e3a-bfc1-0264f1bcf289" containerID="fa561b7eceff1048752459654dc4ea5fa063dff45ae0ae4e42f3de647a85e78a" exitCode=137 Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.870103 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"09bafe4c-f2c6-4736-b731-c3ce9f68f18f","Type":"ContainerStarted","Data":"5ca2fe57e0fc220f460c3678713d8f3f21ed755ef41fe67c58b7513a80a46ece"} Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934417 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a321740f-f577-4e8c-816d-95b714f098c7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934738 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934790 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934839 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbrzl\" (UniqueName: \"kubernetes.io/projected/a321740f-f577-4e8c-816d-95b714f098c7-kube-api-access-wbrzl\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934890 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a321740f-f577-4e8c-816d-95b714f098c7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934933 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb98989b-cdda-4f12-a34f-49d975270871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb98989b-cdda-4f12-a34f-49d975270871\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.934995 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.935801 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.936683 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.937371 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a321740f-f577-4e8c-816d-95b714f098c7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.944668 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-config\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.946431 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.946544 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.946597 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb98989b-cdda-4f12-a34f-49d975270871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb98989b-cdda-4f12-a34f-49d975270871\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/62ec57b632bd2be32801ab7ebb1a93e4d7f53112030649300509a51e7bd172c0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.951362 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a321740f-f577-4e8c-816d-95b714f098c7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.955406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a321740f-f577-4e8c-816d-95b714f098c7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.964210 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a321740f-f577-4e8c-816d-95b714f098c7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:43 crc kubenswrapper[4780]: I0219 10:01:43.968083 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbrzl\" (UniqueName: 
\"kubernetes.io/projected/a321740f-f577-4e8c-816d-95b714f098c7-kube-api-access-wbrzl\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.010228 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb98989b-cdda-4f12-a34f-49d975270871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb98989b-cdda-4f12-a34f-49d975270871\") pod \"prometheus-metric-storage-0\" (UID: \"a321740f-f577-4e8c-816d-95b714f098c7\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.110942 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.244158 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.263909 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.342872 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzm6n\" (UniqueName: \"kubernetes.io/projected/b807c707-a369-4e3a-bfc1-0264f1bcf289-kube-api-access-tzm6n\") pod \"b807c707-a369-4e3a-bfc1-0264f1bcf289\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") "
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.342935 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config-secret\") pod \"b807c707-a369-4e3a-bfc1-0264f1bcf289\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") "
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.343030 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config\") pod \"b807c707-a369-4e3a-bfc1-0264f1bcf289\" (UID: \"b807c707-a369-4e3a-bfc1-0264f1bcf289\") "
Feb 19 10:01:44 crc kubenswrapper[4780]: I0219 10:01:44.355676 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b807c707-a369-4e3a-bfc1-0264f1bcf289-kube-api-access-tzm6n" (OuterVolumeSpecName: "kube-api-access-tzm6n") pod "b807c707-a369-4e3a-bfc1-0264f1bcf289" (UID: "b807c707-a369-4e3a-bfc1-0264f1bcf289"). InnerVolumeSpecName "kube-api-access-tzm6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.400472 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b807c707-a369-4e3a-bfc1-0264f1bcf289" (UID: "b807c707-a369-4e3a-bfc1-0264f1bcf289"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.432864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b807c707-a369-4e3a-bfc1-0264f1bcf289" (UID: "b807c707-a369-4e3a-bfc1-0264f1bcf289"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.449170 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.449208 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzm6n\" (UniqueName: \"kubernetes.io/projected/b807c707-a369-4e3a-bfc1-0264f1bcf289-kube-api-access-tzm6n\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.449221 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b807c707-a369-4e3a-bfc1-0264f1bcf289-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.702809 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 10:01:45 crc kubenswrapper[4780]: W0219 10:01:44.708061 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda321740f_f577_4e8c_816d_95b714f098c7.slice/crio-174ffb6dd7dd79d6782e64403cb991cf0d78be5543892ae7f27bf46710c583b0 WatchSource:0}: Error finding container 174ffb6dd7dd79d6782e64403cb991cf0d78be5543892ae7f27bf46710c583b0: Status 404 returned error can't find the container with id 174ffb6dd7dd79d6782e64403cb991cf0d78be5543892ae7f27bf46710c583b0
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.882546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a321740f-f577-4e8c-816d-95b714f098c7","Type":"ContainerStarted","Data":"174ffb6dd7dd79d6782e64403cb991cf0d78be5543892ae7f27bf46710c583b0"}
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.884454 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d263497-7e86-4475-a86e-c49fa0b57cf3","Type":"ContainerStarted","Data":"a85b3c1922a5a06cfbaac1a192bd4bd5082db520387a2ccab1e91d9271f9ad53"}
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.884548 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.888515 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.889195 4780 scope.go:117] "RemoveContainer" containerID="fa561b7eceff1048752459654dc4ea5fa063dff45ae0ae4e42f3de647a85e78a"
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.890959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"09bafe4c-f2c6-4736-b731-c3ce9f68f18f","Type":"ContainerStarted","Data":"08350323c7c74b33c41c8bc1f7fbd814806bf6843a4fcaa737c8446ebb669075"}
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.892168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"30ad128e-0986-4944-8bdb-ae191d72c28d","Type":"ContainerStarted","Data":"4d8dc5b2fcade421a29b69c440d3f535b71d2fbc3ef5c6ea28f14139469774a0"}
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.916326 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.37712155 podStartE2EDuration="3.916306657s" podCreationTimestamp="2026-02-19 10:01:41 +0000 UTC" firstStartedPulling="2026-02-19 10:01:43.600898825 +0000 UTC m=+6046.344556274" lastFinishedPulling="2026-02-19 10:01:44.140083932 +0000 UTC m=+6046.883741381" observedRunningTime="2026-02-19 10:01:44.900784575 +0000 UTC m=+6047.644442024" watchObservedRunningTime="2026-02-19 10:01:44.916306657 +0000 UTC m=+6047.659964106"
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.918980 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.918968202 podStartE2EDuration="3.918968202s" podCreationTimestamp="2026-02-19 10:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:44.915490927 +0000 UTC m=+6047.659148376" watchObservedRunningTime="2026-02-19 10:01:44.918968202 +0000 UTC m=+6047.662625651"
Feb 19 10:01:45 crc kubenswrapper[4780]: I0219 10:01:44.919870 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" podUID="09bafe4c-f2c6-4736-b731-c3ce9f68f18f"
Feb 19 10:01:46 crc kubenswrapper[4780]: I0219 10:01:46.010285 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b807c707-a369-4e3a-bfc1-0264f1bcf289" path="/var/lib/kubelet/pods/b807c707-a369-4e3a-bfc1-0264f1bcf289/volumes"
Feb 19 10:01:46 crc kubenswrapper[4780]: I0219 10:01:46.101956 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxgjt"]
Feb 19 10:01:46 crc kubenswrapper[4780]: I0219 10:01:46.142902 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxgjt"]
Feb 19 10:01:46 crc kubenswrapper[4780]: I0219 10:01:46.879385 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:01:46 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:01:46 crc kubenswrapper[4780]: >
Feb 19 10:01:47 crc kubenswrapper[4780]: I0219 10:01:47.034139 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cthrb"]
Feb 19 10:01:47 crc kubenswrapper[4780]: I0219 10:01:47.049988 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cthrb"]
Feb 19 10:01:47 crc kubenswrapper[4780]: I0219 10:01:47.953878 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f78c83f-0497-4f37-bd31-e73228b93e78" path="/var/lib/kubelet/pods/4f78c83f-0497-4f37-bd31-e73228b93e78/volumes"
Feb 19 10:01:47 crc kubenswrapper[4780]: I0219 10:01:47.957153 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c9b76e-da33-4c48-8fb6-2b2c461007c7" path="/var/lib/kubelet/pods/b3c9b76e-da33-4c48-8fb6-2b2c461007c7/volumes"
Feb 19 10:01:52 crc kubenswrapper[4780]: I0219 10:01:52.014604 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"30ad128e-0986-4944-8bdb-ae191d72c28d","Type":"ContainerStarted","Data":"2185e3157266ad8ee0315edcad9ad08c863eaad6618b7bc20d440e1defec6222"}
Feb 19 10:01:52 crc kubenswrapper[4780]: I0219 10:01:52.017568 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a321740f-f577-4e8c-816d-95b714f098c7","Type":"ContainerStarted","Data":"c4074ca24a6593905d7d452800c6b3f8216f02ea3e23d71a700c7f624716f0f4"}
Feb 19 10:01:52 crc kubenswrapper[4780]: I0219 10:01:52.384100 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 19 10:01:55 crc kubenswrapper[4780]: I0219 10:01:55.903465 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:01:55 crc kubenswrapper[4780]: I0219 10:01:55.980286 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:01:56 crc kubenswrapper[4780]: I0219 10:01:56.156192 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pz6lq"]
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.074277 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pz6lq" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server" containerID="cri-o://f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e" gracePeriod=2
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.796528 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.912162 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-utilities\") pod \"c6df687d-fedc-400e-bf1d-50fb5be4033c\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") "
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.912469 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-catalog-content\") pod \"c6df687d-fedc-400e-bf1d-50fb5be4033c\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") "
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.912608 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r55zw\" (UniqueName: \"kubernetes.io/projected/c6df687d-fedc-400e-bf1d-50fb5be4033c-kube-api-access-r55zw\") pod \"c6df687d-fedc-400e-bf1d-50fb5be4033c\" (UID: \"c6df687d-fedc-400e-bf1d-50fb5be4033c\") "
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.914285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-utilities" (OuterVolumeSpecName: "utilities") pod "c6df687d-fedc-400e-bf1d-50fb5be4033c" (UID: "c6df687d-fedc-400e-bf1d-50fb5be4033c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:01:57 crc kubenswrapper[4780]: I0219 10:01:57.924528 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6df687d-fedc-400e-bf1d-50fb5be4033c-kube-api-access-r55zw" (OuterVolumeSpecName: "kube-api-access-r55zw") pod "c6df687d-fedc-400e-bf1d-50fb5be4033c" (UID: "c6df687d-fedc-400e-bf1d-50fb5be4033c"). InnerVolumeSpecName "kube-api-access-r55zw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.075585 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r55zw\" (UniqueName: \"kubernetes.io/projected/c6df687d-fedc-400e-bf1d-50fb5be4033c-kube-api-access-r55zw\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.090248 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.145468 4780 generic.go:334] "Generic (PLEG): container finished" podID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerID="f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e" exitCode=0
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.145544 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerDied","Data":"f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e"}
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.145593 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz6lq" event={"ID":"c6df687d-fedc-400e-bf1d-50fb5be4033c","Type":"ContainerDied","Data":"c3127d219274a031d5d4c2b0502836897fe8c1afe2385515d3e39e358a2fb96e"}
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.145619 4780 scope.go:117] "RemoveContainer" containerID="f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.145823 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pz6lq"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.170263 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6df687d-fedc-400e-bf1d-50fb5be4033c" (UID: "c6df687d-fedc-400e-bf1d-50fb5be4033c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.174333 4780 scope.go:117] "RemoveContainer" containerID="ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.192054 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6df687d-fedc-400e-bf1d-50fb5be4033c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.200820 4780 scope.go:117] "RemoveContainer" containerID="a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.244806 4780 scope.go:117] "RemoveContainer" containerID="f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e"
Feb 19 10:01:58 crc kubenswrapper[4780]: E0219 10:01:58.245413 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e\": container with ID starting with f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e not found: ID does not exist" containerID="f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.245466 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e"} err="failed to get container status \"f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e\": rpc error: code = NotFound desc = could not find container \"f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e\": container with ID starting with f05115e70c7b5c244c3e8daa4c2840d3e3c497ec634aa89d0b727a0313a2fb0e not found: ID does not exist"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.245498 4780 scope.go:117] "RemoveContainer" containerID="ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8"
Feb 19 10:01:58 crc kubenswrapper[4780]: E0219 10:01:58.245801 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8\": container with ID starting with ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8 not found: ID does not exist" containerID="ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.245832 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8"} err="failed to get container status \"ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8\": rpc error: code = NotFound desc = could not find container \"ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8\": container with ID starting with ef4962a2f7a6282d359a530a96a8382443d05dbcdd375eea8339dc3f0ab8a2d8 not found: ID does not exist"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.245847 4780 scope.go:117] "RemoveContainer" containerID="a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308"
Feb 19 10:01:58 crc kubenswrapper[4780]: E0219 10:01:58.246409 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308\": container with ID starting with a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308 not found: ID does not exist" containerID="a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.246468 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308"} err="failed to get container status \"a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308\": rpc error: code = NotFound desc = could not find container \"a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308\": container with ID starting with a925fdede2c517b67a066742844ca37707f97507f99f353f57846b0db09cc308 not found: ID does not exist"
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.514604 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pz6lq"]
Feb 19 10:01:58 crc kubenswrapper[4780]: I0219 10:01:58.535458 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pz6lq"]
Feb 19 10:01:59 crc kubenswrapper[4780]: I0219 10:01:59.161101 4780 generic.go:334] "Generic (PLEG): container finished" podID="30ad128e-0986-4944-8bdb-ae191d72c28d" containerID="2185e3157266ad8ee0315edcad9ad08c863eaad6618b7bc20d440e1defec6222" exitCode=0
Feb 19 10:01:59 crc kubenswrapper[4780]: I0219 10:01:59.161492 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"30ad128e-0986-4944-8bdb-ae191d72c28d","Type":"ContainerDied","Data":"2185e3157266ad8ee0315edcad9ad08c863eaad6618b7bc20d440e1defec6222"}
Feb 19 10:01:59 crc kubenswrapper[4780]: I0219 10:01:59.959528 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" path="/var/lib/kubelet/pods/c6df687d-fedc-400e-bf1d-50fb5be4033c/volumes"
Feb 19 10:02:00 crc kubenswrapper[4780]: I0219 10:02:00.043670 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqbvj"]
Feb 19 10:02:00 crc kubenswrapper[4780]: I0219 10:02:00.055960 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqbvj"]
Feb 19 10:02:00 crc kubenswrapper[4780]: I0219 10:02:00.172025 4780 generic.go:334] "Generic (PLEG): container finished" podID="a321740f-f577-4e8c-816d-95b714f098c7" containerID="c4074ca24a6593905d7d452800c6b3f8216f02ea3e23d71a700c7f624716f0f4" exitCode=0
Feb 19 10:02:00 crc kubenswrapper[4780]: I0219 10:02:00.172071 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a321740f-f577-4e8c-816d-95b714f098c7","Type":"ContainerDied","Data":"c4074ca24a6593905d7d452800c6b3f8216f02ea3e23d71a700c7f624716f0f4"}
Feb 19 10:02:01 crc kubenswrapper[4780]: I0219 10:02:01.954146 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3da8aec-25ad-4017-bb05-6b87fa4f359a" path="/var/lib/kubelet/pods/b3da8aec-25ad-4017-bb05-6b87fa4f359a/volumes"
Feb 19 10:02:02 crc kubenswrapper[4780]: I0219 10:02:02.201018 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"30ad128e-0986-4944-8bdb-ae191d72c28d","Type":"ContainerStarted","Data":"ae629220a533597999961e21daee394cc6c2b494fe3ac682599d3050c92a6656"}
Feb 19 10:02:05 crc kubenswrapper[4780]: I0219 10:02:05.238150 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"30ad128e-0986-4944-8bdb-ae191d72c28d","Type":"ContainerStarted","Data":"d9de6e1af2d8a5b284ffb157366d4d7a5d1b5048e2f61c72669512025088b09d"}
Feb 19 10:02:05 crc kubenswrapper[4780]: I0219 10:02:05.238758 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:02:05 crc kubenswrapper[4780]: I0219 10:02:05.247016 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:02:05 crc kubenswrapper[4780]: I0219 10:02:05.327840 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.534408093 podStartE2EDuration="22.327818598s" podCreationTimestamp="2026-02-19 10:01:43 +0000 UTC" firstStartedPulling="2026-02-19 10:01:44.333097557 +0000 UTC m=+6047.076755016" lastFinishedPulling="2026-02-19 10:02:01.126508072 +0000 UTC m=+6063.870165521" observedRunningTime="2026-02-19 10:02:05.271242157 +0000 UTC m=+6068.014899666" watchObservedRunningTime="2026-02-19 10:02:05.327818598 +0000 UTC m=+6068.071476047"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.269953 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a321740f-f577-4e8c-816d-95b714f098c7","Type":"ContainerStarted","Data":"5f661162fdc277753cb2fafc5782cee8d590fa1164fd004151f7ef47d9b8fdd6"}
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.581955 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cv4r6"]
Feb 19 10:02:06 crc kubenswrapper[4780]: E0219 10:02:06.582427 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="extract-utilities"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.582448 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="extract-utilities"
Feb 19 10:02:06 crc kubenswrapper[4780]: E0219 10:02:06.582467 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.582474 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server"
Feb 19 10:02:06 crc kubenswrapper[4780]: E0219 10:02:06.582516 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="extract-content"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.582523 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="extract-content"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.582727 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6df687d-fedc-400e-bf1d-50fb5be4033c" containerName="registry-server"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.587278 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.619550 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv4r6"]
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.710348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-catalog-content\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.710711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-utilities\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.710977 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjz6l\" (UniqueName: \"kubernetes.io/projected/f27f1c14-3ccc-4048-8686-ea91d53e88d5-kube-api-access-wjz6l\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.813041 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjz6l\" (UniqueName: \"kubernetes.io/projected/f27f1c14-3ccc-4048-8686-ea91d53e88d5-kube-api-access-wjz6l\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.813585 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-catalog-content\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.814081 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-catalog-content\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.814348 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-utilities\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.814653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-utilities\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.837597 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjz6l\" (UniqueName: \"kubernetes.io/projected/f27f1c14-3ccc-4048-8686-ea91d53e88d5-kube-api-access-wjz6l\") pod \"redhat-marketplace-cv4r6\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:06 crc kubenswrapper[4780]: I0219 10:02:06.914718 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:07 crc kubenswrapper[4780]: I0219 10:02:07.447942 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv4r6"]
Feb 19 10:02:07 crc kubenswrapper[4780]: W0219 10:02:07.454684 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf27f1c14_3ccc_4048_8686_ea91d53e88d5.slice/crio-57ff7500de8bfdcbeceb825e549a322c2e0e96656c57f96ffcec6f9d8d0ab5f7 WatchSource:0}: Error finding container 57ff7500de8bfdcbeceb825e549a322c2e0e96656c57f96ffcec6f9d8d0ab5f7: Status 404 returned error can't find the container with id 57ff7500de8bfdcbeceb825e549a322c2e0e96656c57f96ffcec6f9d8d0ab5f7
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.219505 4780 scope.go:117] "RemoveContainer" containerID="b793c8c9e27fe989bf8d97b8a0f99db3119513ba03cada87cc77eed99f78f0d8"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.244997 4780 scope.go:117] "RemoveContainer" containerID="418bc78d66b7dad6743fa5f4f31a15c61ce668dc6012da205bc64c332f66e444"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.311881 4780 generic.go:334] "Generic (PLEG): container finished" podID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerID="46893bded35072c8721974ecbbb4b591634c12b1d4bd5b778c010fc0bb172b27" exitCode=0
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.311945 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerDied","Data":"46893bded35072c8721974ecbbb4b591634c12b1d4bd5b778c010fc0bb172b27"}
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.311999 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerStarted","Data":"57ff7500de8bfdcbeceb825e549a322c2e0e96656c57f96ffcec6f9d8d0ab5f7"}
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.314788 4780 scope.go:117] "RemoveContainer" containerID="0124818e1adf352f65b325caa32f20892863a99980514839c5d57d773c0afed9"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.368054 4780 scope.go:117] "RemoveContainer" containerID="09c807cb2c88b53ca9cfef415d40ce92a2eecbbbaa908a706ceae74005d48a69"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.544236 4780 scope.go:117] "RemoveContainer" containerID="ece8e0f13a76f2cd79f7ab1fcd7bc4abc401d670d2dadd0a90c67606fdb993ac"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.586623 4780 scope.go:117] "RemoveContainer" containerID="f380bee83395e7e262c55ddf74173e8e169d0995f5839b82b1c3b416cd0aef96"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.658932 4780 scope.go:117] "RemoveContainer" containerID="ca6b6f0620dc49912cda17c2f631d21794cca1d4557cbae385aae4af38370c8f"
Feb 19 10:02:08 crc kubenswrapper[4780]: I0219 10:02:08.989371 4780 scope.go:117] "RemoveContainer" containerID="ed69dbd290d7b2494ec99ff94a8bd7e09a117cf1d33df9362274a40e07df5086"
Feb 19 10:02:09 crc kubenswrapper[4780]: I0219 10:02:09.019082 4780 scope.go:117] "RemoveContainer" containerID="6d58c596c96e2a440644c73cbb63b232ba2e0a158ce65e68922ee256360f667b"
Feb 19 10:02:09 crc kubenswrapper[4780]: I0219 10:02:09.079784 4780 scope.go:117] "RemoveContainer" containerID="3711dda938dee06ddd9ae46f83b57c758dd7fccef8194dd1d99fcb8c66adf618"
Feb 19 10:02:09 crc kubenswrapper[4780]: I0219 10:02:09.112090 4780 scope.go:117] "RemoveContainer" containerID="71fab456b287f7f8d669e6c81599f27688415d408e2547af9a875447cd510acd"
Feb 19 10:02:09 crc kubenswrapper[4780]: I0219 10:02:09.149883 4780 scope.go:117] "RemoveContainer" containerID="7eda8e95a36077a313a814115829f0df28ffa033671f6302114e1fa6599254ea"
Feb 19 10:02:10 crc kubenswrapper[4780]: I0219 10:02:10.359511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerStarted","Data":"5b9d99e284400149193896c07c07a911bc467bfee0a6cbf5f8ce31057c76c28e"}
Feb 19 10:02:11 crc kubenswrapper[4780]: I0219 10:02:11.371942 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a321740f-f577-4e8c-816d-95b714f098c7","Type":"ContainerStarted","Data":"17fd654e2c9cf20895dd0106745ad7a86c9dcd9d979155206d24038253dbca98"}
Feb 19 10:02:11 crc kubenswrapper[4780]: I0219 10:02:11.375064 4780 generic.go:334] "Generic (PLEG): container finished" podID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerID="5b9d99e284400149193896c07c07a911bc467bfee0a6cbf5f8ce31057c76c28e" exitCode=0
Feb 19 10:02:11 crc kubenswrapper[4780]: I0219 10:02:11.375159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerDied","Data":"5b9d99e284400149193896c07c07a911bc467bfee0a6cbf5f8ce31057c76c28e"}
Feb 19 10:02:12 crc kubenswrapper[4780]: I0219 10:02:12.388843 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerStarted","Data":"4c8fadd70762df249ce18fae5ad4e57b327d07fff3f60fb61c66693b4e2c82f6"}
Feb 19 10:02:12 crc kubenswrapper[4780]: I0219 10:02:12.418255 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cv4r6" podStartSLOduration=2.943189008 podStartE2EDuration="6.418231468s" podCreationTimestamp="2026-02-19 10:02:06 +0000 UTC" firstStartedPulling="2026-02-19 10:02:08.31447948 +0000 UTC m=+6071.058136929" lastFinishedPulling="2026-02-19 10:02:11.78952194 +0000 UTC m=+6074.533179389" observedRunningTime="2026-02-19 10:02:12.411795559 +0000 UTC m=+6075.155453008" watchObservedRunningTime="2026-02-19 10:02:12.418231468 +0000 UTC m=+6075.161888917"
Feb 19 10:02:14 crc kubenswrapper[4780]: I0219 10:02:14.423481 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a321740f-f577-4e8c-816d-95b714f098c7","Type":"ContainerStarted","Data":"e5d9f712dd8a60477aadd2a011f228aec032a96f654c5eb134fc6d88d3ad9b8e"}
Feb 19 10:02:14 crc kubenswrapper[4780]: I0219 10:02:14.466224 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.348936993 podStartE2EDuration="32.46611968s" podCreationTimestamp="2026-02-19 10:01:42 +0000 UTC" firstStartedPulling="2026-02-19 10:01:44.710890536 +0000 UTC m=+6047.454547995" lastFinishedPulling="2026-02-19 10:02:13.828073223 +0000 UTC m=+6076.571730682" observedRunningTime="2026-02-19 10:02:14.458346859 +0000 UTC m=+6077.202004338" watchObservedRunningTime="2026-02-19 10:02:14.46611968 +0000 UTC m=+6077.209777139"
Feb 19 10:02:16 crc kubenswrapper[4780]: I0219 10:02:16.915058 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:16 crc kubenswrapper[4780]: I0219 10:02:16.915687 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:17 crc kubenswrapper[4780]: I0219 10:02:17.006403 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:17 crc kubenswrapper[4780]: I0219 10:02:17.504887 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cv4r6"
Feb 19 10:02:17 crc kubenswrapper[4780]: I0219 10:02:17.564405 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv4r6"]
Feb 19 10:02:19 crc kubenswrapper[4780]: I0219 10:02:19.111965 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:19 crc kubenswrapper[4780]: I0219 10:02:19.480529 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cv4r6" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="registry-server" containerID="cri-o://4c8fadd70762df249ce18fae5ad4e57b327d07fff3f60fb61c66693b4e2c82f6" gracePeriod=2 Feb 19 10:02:20 crc kubenswrapper[4780]: I0219 10:02:20.493694 4780 generic.go:334] "Generic (PLEG): container finished" podID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerID="4c8fadd70762df249ce18fae5ad4e57b327d07fff3f60fb61c66693b4e2c82f6" exitCode=0 Feb 19 10:02:20 crc kubenswrapper[4780]: I0219 10:02:20.493778 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerDied","Data":"4c8fadd70762df249ce18fae5ad4e57b327d07fff3f60fb61c66693b4e2c82f6"} Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.389598 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv4r6" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.474019 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-catalog-content\") pod \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.474206 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-utilities\") pod \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.474270 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjz6l\" (UniqueName: \"kubernetes.io/projected/f27f1c14-3ccc-4048-8686-ea91d53e88d5-kube-api-access-wjz6l\") pod \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\" (UID: \"f27f1c14-3ccc-4048-8686-ea91d53e88d5\") " Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.475220 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-utilities" (OuterVolumeSpecName: "utilities") pod "f27f1c14-3ccc-4048-8686-ea91d53e88d5" (UID: "f27f1c14-3ccc-4048-8686-ea91d53e88d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.480513 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27f1c14-3ccc-4048-8686-ea91d53e88d5-kube-api-access-wjz6l" (OuterVolumeSpecName: "kube-api-access-wjz6l") pod "f27f1c14-3ccc-4048-8686-ea91d53e88d5" (UID: "f27f1c14-3ccc-4048-8686-ea91d53e88d5"). InnerVolumeSpecName "kube-api-access-wjz6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.537385 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv4r6" event={"ID":"f27f1c14-3ccc-4048-8686-ea91d53e88d5","Type":"ContainerDied","Data":"57ff7500de8bfdcbeceb825e549a322c2e0e96656c57f96ffcec6f9d8d0ab5f7"} Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.538486 4780 scope.go:117] "RemoveContainer" containerID="4c8fadd70762df249ce18fae5ad4e57b327d07fff3f60fb61c66693b4e2c82f6" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.538776 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv4r6" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.556307 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f27f1c14-3ccc-4048-8686-ea91d53e88d5" (UID: "f27f1c14-3ccc-4048-8686-ea91d53e88d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.578905 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.578941 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjz6l\" (UniqueName: \"kubernetes.io/projected/f27f1c14-3ccc-4048-8686-ea91d53e88d5-kube-api-access-wjz6l\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.578951 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f1c14-3ccc-4048-8686-ea91d53e88d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.583931 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:21 crc kubenswrapper[4780]: E0219 10:02:21.584562 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="registry-server" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.584586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="registry-server" Feb 19 10:02:21 crc kubenswrapper[4780]: E0219 10:02:21.584611 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="extract-utilities" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.584619 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="extract-utilities" Feb 19 10:02:21 crc kubenswrapper[4780]: E0219 10:02:21.584647 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" 
containerName="extract-content" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.584653 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="extract-content" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.584928 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" containerName="registry-server" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.588819 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.595337 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.595539 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.598148 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.605911 4780 scope.go:117] "RemoveContainer" containerID="5b9d99e284400149193896c07c07a911bc467bfee0a6cbf5f8ce31057c76c28e" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.634396 4780 scope.go:117] "RemoveContainer" containerID="46893bded35072c8721974ecbbb4b591634c12b1d4bd5b778c010fc0bb172b27" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.682278 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.682324 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-config-data\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.682367 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-run-httpd\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.682406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.683006 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-log-httpd\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.683151 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-scripts\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.683279 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7jk\" (UniqueName: \"kubernetes.io/projected/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-kube-api-access-ll7jk\") pod \"ceilometer-0\" (UID: 
\"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785589 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-config-data\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785687 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-run-httpd\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785809 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-log-httpd\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-scripts\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.785882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7jk\" (UniqueName: \"kubernetes.io/projected/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-kube-api-access-ll7jk\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.787756 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-run-httpd\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.787819 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-log-httpd\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.791158 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.791944 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-scripts\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.794459 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.796314 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-config-data\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.803061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7jk\" (UniqueName: \"kubernetes.io/projected/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-kube-api-access-ll7jk\") pod \"ceilometer-0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") " pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.893143 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv4r6"] Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.906503 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv4r6"] Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.913661 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:02:21 crc kubenswrapper[4780]: I0219 10:02:21.964868 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27f1c14-3ccc-4048-8686-ea91d53e88d5" path="/var/lib/kubelet/pods/f27f1c14-3ccc-4048-8686-ea91d53e88d5/volumes" Feb 19 10:02:22 crc kubenswrapper[4780]: I0219 10:02:22.421114 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:22 crc kubenswrapper[4780]: W0219 10:02:22.438025 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice/crio-8ce0f5c1d42f56b893743a50b856a4cca8f66578d399ae84c36c9fe0b19a1f40 WatchSource:0}: Error finding container 8ce0f5c1d42f56b893743a50b856a4cca8f66578d399ae84c36c9fe0b19a1f40: Status 404 returned error can't find the container with id 8ce0f5c1d42f56b893743a50b856a4cca8f66578d399ae84c36c9fe0b19a1f40 Feb 19 10:02:22 crc kubenswrapper[4780]: I0219 10:02:22.549745 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerStarted","Data":"8ce0f5c1d42f56b893743a50b856a4cca8f66578d399ae84c36c9fe0b19a1f40"} Feb 19 10:02:24 crc kubenswrapper[4780]: I0219 10:02:24.573792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerStarted","Data":"fb35ed8e30ddc39cc5750abdeecff0a722ebd32f632c440f2f3295cd9021ee78"} Feb 19 10:02:24 crc kubenswrapper[4780]: I0219 10:02:24.574388 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerStarted","Data":"0432ac2753e983970d66683c4772de83662d57b42f8f24f2f5aa30132797e2d3"} Feb 19 10:02:25 crc kubenswrapper[4780]: I0219 10:02:25.585244 4780 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerStarted","Data":"0c404cdd6ab1c590f48eb703f3ed6a198d516a2ffa454fac4e5ca670f2021438"} Feb 19 10:02:26 crc kubenswrapper[4780]: I0219 10:02:26.595645 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerStarted","Data":"d020f2920fa0451328833d279e83d1f4eb288ada42568d5c225f9444569af3f3"} Feb 19 10:02:27 crc kubenswrapper[4780]: I0219 10:02:27.609234 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:02:27 crc kubenswrapper[4780]: I0219 10:02:27.633344 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.744514276 podStartE2EDuration="6.63332512s" podCreationTimestamp="2026-02-19 10:02:21 +0000 UTC" firstStartedPulling="2026-02-19 10:02:22.440761391 +0000 UTC m=+6085.184418830" lastFinishedPulling="2026-02-19 10:02:26.329572225 +0000 UTC m=+6089.073229674" observedRunningTime="2026-02-19 10:02:27.631973617 +0000 UTC m=+6090.375631076" watchObservedRunningTime="2026-02-19 10:02:27.63332512 +0000 UTC m=+6090.376982579" Feb 19 10:02:29 crc kubenswrapper[4780]: I0219 10:02:29.111549 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:29 crc kubenswrapper[4780]: I0219 10:02:29.116608 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:29 crc kubenswrapper[4780]: I0219 10:02:29.131462 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.330088 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-f67lq"] Feb 19 10:02:33 crc kubenswrapper[4780]: 
I0219 10:02:33.332783 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.346743 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-f67lq"] Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.433890 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-353b-account-create-update-smsfg"] Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.435784 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.444691 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.444870 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-353b-account-create-update-smsfg"] Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.494297 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rfz\" (UniqueName: \"kubernetes.io/projected/249c7d5e-f058-4d69-8e12-69472cf9c8b0-kube-api-access-26rfz\") pod \"aodh-db-create-f67lq\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.494474 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249c7d5e-f058-4d69-8e12-69472cf9c8b0-operator-scripts\") pod \"aodh-db-create-f67lq\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.596757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26rfz\" (UniqueName: 
\"kubernetes.io/projected/249c7d5e-f058-4d69-8e12-69472cf9c8b0-kube-api-access-26rfz\") pod \"aodh-db-create-f67lq\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.596829 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2cqb\" (UniqueName: \"kubernetes.io/projected/a96e2eaa-7162-49ea-adf8-66cc39516d9c-kube-api-access-l2cqb\") pod \"aodh-353b-account-create-update-smsfg\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.596879 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249c7d5e-f058-4d69-8e12-69472cf9c8b0-operator-scripts\") pod \"aodh-db-create-f67lq\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.596900 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96e2eaa-7162-49ea-adf8-66cc39516d9c-operator-scripts\") pod \"aodh-353b-account-create-update-smsfg\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.597877 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249c7d5e-f058-4d69-8e12-69472cf9c8b0-operator-scripts\") pod \"aodh-db-create-f67lq\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.621009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26rfz\" (UniqueName: 
\"kubernetes.io/projected/249c7d5e-f058-4d69-8e12-69472cf9c8b0-kube-api-access-26rfz\") pod \"aodh-db-create-f67lq\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.654996 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.699790 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96e2eaa-7162-49ea-adf8-66cc39516d9c-operator-scripts\") pod \"aodh-353b-account-create-update-smsfg\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.700335 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2cqb\" (UniqueName: \"kubernetes.io/projected/a96e2eaa-7162-49ea-adf8-66cc39516d9c-kube-api-access-l2cqb\") pod \"aodh-353b-account-create-update-smsfg\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.701671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96e2eaa-7162-49ea-adf8-66cc39516d9c-operator-scripts\") pod \"aodh-353b-account-create-update-smsfg\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.724741 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2cqb\" (UniqueName: \"kubernetes.io/projected/a96e2eaa-7162-49ea-adf8-66cc39516d9c-kube-api-access-l2cqb\") pod \"aodh-353b-account-create-update-smsfg\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " 
pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:33 crc kubenswrapper[4780]: I0219 10:02:33.759835 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.300812 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-f67lq"] Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.377426 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-353b-account-create-update-smsfg"] Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.694413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f67lq" event={"ID":"249c7d5e-f058-4d69-8e12-69472cf9c8b0","Type":"ContainerStarted","Data":"7a673e9211bdd465bdf90dc5e8f198c812136e700e79b26380dc4a1da51eaac8"} Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.694982 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f67lq" event={"ID":"249c7d5e-f058-4d69-8e12-69472cf9c8b0","Type":"ContainerStarted","Data":"21c2d7a621f4e2c4e677ae555abc3107fbb09d77ed30aa067236bdce53c8ee3d"} Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.699134 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-353b-account-create-update-smsfg" event={"ID":"a96e2eaa-7162-49ea-adf8-66cc39516d9c","Type":"ContainerStarted","Data":"eebbddba5141dd70ad520909dc034d9e96b2cb524bc22dfa73a194e3305b6a14"} Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.699343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-353b-account-create-update-smsfg" event={"ID":"a96e2eaa-7162-49ea-adf8-66cc39516d9c","Type":"ContainerStarted","Data":"9a407c9f4ae1c10c046ae914fe8ad46bed390406b62d6aa981300578a286aaa8"} Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.738027 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-db-create-f67lq" podStartSLOduration=1.7380049309999999 podStartE2EDuration="1.738004931s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:34.718594103 +0000 UTC m=+6097.462251552" watchObservedRunningTime="2026-02-19 10:02:34.738004931 +0000 UTC m=+6097.481662380" Feb 19 10:02:34 crc kubenswrapper[4780]: I0219 10:02:34.738441 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-353b-account-create-update-smsfg" podStartSLOduration=1.7384350309999999 podStartE2EDuration="1.738435031s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:34.734794062 +0000 UTC m=+6097.478451511" watchObservedRunningTime="2026-02-19 10:02:34.738435031 +0000 UTC m=+6097.482092480" Feb 19 10:02:35 crc kubenswrapper[4780]: I0219 10:02:35.713755 4780 generic.go:334] "Generic (PLEG): container finished" podID="249c7d5e-f058-4d69-8e12-69472cf9c8b0" containerID="7a673e9211bdd465bdf90dc5e8f198c812136e700e79b26380dc4a1da51eaac8" exitCode=0 Feb 19 10:02:35 crc kubenswrapper[4780]: I0219 10:02:35.713823 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f67lq" event={"ID":"249c7d5e-f058-4d69-8e12-69472cf9c8b0","Type":"ContainerDied","Data":"7a673e9211bdd465bdf90dc5e8f198c812136e700e79b26380dc4a1da51eaac8"} Feb 19 10:02:35 crc kubenswrapper[4780]: I0219 10:02:35.716811 4780 generic.go:334] "Generic (PLEG): container finished" podID="a96e2eaa-7162-49ea-adf8-66cc39516d9c" containerID="eebbddba5141dd70ad520909dc034d9e96b2cb524bc22dfa73a194e3305b6a14" exitCode=0 Feb 19 10:02:35 crc kubenswrapper[4780]: I0219 10:02:35.716878 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-353b-account-create-update-smsfg" event={"ID":"a96e2eaa-7162-49ea-adf8-66cc39516d9c","Type":"ContainerDied","Data":"eebbddba5141dd70ad520909dc034d9e96b2cb524bc22dfa73a194e3305b6a14"} Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.221527 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.227378 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.395437 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26rfz\" (UniqueName: \"kubernetes.io/projected/249c7d5e-f058-4d69-8e12-69472cf9c8b0-kube-api-access-26rfz\") pod \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.396099 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2cqb\" (UniqueName: \"kubernetes.io/projected/a96e2eaa-7162-49ea-adf8-66cc39516d9c-kube-api-access-l2cqb\") pod \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\" (UID: \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.396435 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249c7d5e-f058-4d69-8e12-69472cf9c8b0-operator-scripts\") pod \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\" (UID: \"249c7d5e-f058-4d69-8e12-69472cf9c8b0\") " Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.396520 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96e2eaa-7162-49ea-adf8-66cc39516d9c-operator-scripts\") pod \"a96e2eaa-7162-49ea-adf8-66cc39516d9c\" (UID: 
\"a96e2eaa-7162-49ea-adf8-66cc39516d9c\") " Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.397393 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/249c7d5e-f058-4d69-8e12-69472cf9c8b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "249c7d5e-f058-4d69-8e12-69472cf9c8b0" (UID: "249c7d5e-f058-4d69-8e12-69472cf9c8b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.397635 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a96e2eaa-7162-49ea-adf8-66cc39516d9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a96e2eaa-7162-49ea-adf8-66cc39516d9c" (UID: "a96e2eaa-7162-49ea-adf8-66cc39516d9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.404362 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249c7d5e-f058-4d69-8e12-69472cf9c8b0-kube-api-access-26rfz" (OuterVolumeSpecName: "kube-api-access-26rfz") pod "249c7d5e-f058-4d69-8e12-69472cf9c8b0" (UID: "249c7d5e-f058-4d69-8e12-69472cf9c8b0"). InnerVolumeSpecName "kube-api-access-26rfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.406577 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96e2eaa-7162-49ea-adf8-66cc39516d9c-kube-api-access-l2cqb" (OuterVolumeSpecName: "kube-api-access-l2cqb") pod "a96e2eaa-7162-49ea-adf8-66cc39516d9c" (UID: "a96e2eaa-7162-49ea-adf8-66cc39516d9c"). InnerVolumeSpecName "kube-api-access-l2cqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.499538 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249c7d5e-f058-4d69-8e12-69472cf9c8b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.499586 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96e2eaa-7162-49ea-adf8-66cc39516d9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.499603 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26rfz\" (UniqueName: \"kubernetes.io/projected/249c7d5e-f058-4d69-8e12-69472cf9c8b0-kube-api-access-26rfz\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.499615 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2cqb\" (UniqueName: \"kubernetes.io/projected/a96e2eaa-7162-49ea-adf8-66cc39516d9c-kube-api-access-l2cqb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.750555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f67lq" event={"ID":"249c7d5e-f058-4d69-8e12-69472cf9c8b0","Type":"ContainerDied","Data":"21c2d7a621f4e2c4e677ae555abc3107fbb09d77ed30aa067236bdce53c8ee3d"} Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.750624 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21c2d7a621f4e2c4e677ae555abc3107fbb09d77ed30aa067236bdce53c8ee3d" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.750718 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-f67lq" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.756386 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-353b-account-create-update-smsfg" event={"ID":"a96e2eaa-7162-49ea-adf8-66cc39516d9c","Type":"ContainerDied","Data":"9a407c9f4ae1c10c046ae914fe8ad46bed390406b62d6aa981300578a286aaa8"} Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.756459 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a407c9f4ae1c10c046ae914fe8ad46bed390406b62d6aa981300578a286aaa8" Feb 19 10:02:37 crc kubenswrapper[4780]: I0219 10:02:37.756466 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-353b-account-create-update-smsfg" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.788832 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-ckt4w"] Feb 19 10:02:38 crc kubenswrapper[4780]: E0219 10:02:38.789690 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a96e2eaa-7162-49ea-adf8-66cc39516d9c" containerName="mariadb-account-create-update" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.789705 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a96e2eaa-7162-49ea-adf8-66cc39516d9c" containerName="mariadb-account-create-update" Feb 19 10:02:38 crc kubenswrapper[4780]: E0219 10:02:38.789729 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249c7d5e-f058-4d69-8e12-69472cf9c8b0" containerName="mariadb-database-create" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.789736 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="249c7d5e-f058-4d69-8e12-69472cf9c8b0" containerName="mariadb-database-create" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.789941 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="249c7d5e-f058-4d69-8e12-69472cf9c8b0" containerName="mariadb-database-create" 
Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.789960 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a96e2eaa-7162-49ea-adf8-66cc39516d9c" containerName="mariadb-account-create-update" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.790820 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.793347 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.794359 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.794686 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4wljw" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.804849 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-ckt4w"] Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.806002 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.945955 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-combined-ca-bundle\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.946042 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvh92\" (UniqueName: \"kubernetes.io/projected/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-kube-api-access-tvh92\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 
19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.946791 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-config-data\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:38 crc kubenswrapper[4780]: I0219 10:02:38.947062 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-scripts\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.049200 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-scripts\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.049281 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-combined-ca-bundle\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.049343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvh92\" (UniqueName: \"kubernetes.io/projected/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-kube-api-access-tvh92\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.049440 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-config-data\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.110524 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-scripts\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.111022 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-combined-ca-bundle\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.121011 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvh92\" (UniqueName: \"kubernetes.io/projected/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-kube-api-access-tvh92\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.121272 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-config-data\") pod \"aodh-db-sync-ckt4w\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.135498 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.664631 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-ckt4w"] Feb 19 10:02:39 crc kubenswrapper[4780]: I0219 10:02:39.780950 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ckt4w" event={"ID":"bb40d031-7af4-4922-bbe3-9d14ab3e80ff","Type":"ContainerStarted","Data":"7c65c07fff1702e519c95c7881ac7a62ded29898e60ac8502127c514a9a9ef1d"} Feb 19 10:02:44 crc kubenswrapper[4780]: I0219 10:02:44.050149 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8l9ts"] Feb 19 10:02:44 crc kubenswrapper[4780]: I0219 10:02:44.062548 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f2b9-account-create-update-4tz6x"] Feb 19 10:02:44 crc kubenswrapper[4780]: I0219 10:02:44.073422 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8l9ts"] Feb 19 10:02:44 crc kubenswrapper[4780]: I0219 10:02:44.084695 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f2b9-account-create-update-4tz6x"] Feb 19 10:02:44 crc kubenswrapper[4780]: I0219 10:02:44.854680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ckt4w" event={"ID":"bb40d031-7af4-4922-bbe3-9d14ab3e80ff","Type":"ContainerStarted","Data":"30fc7b8e3590a849452ff1aede5a708f562f72894e44984afaeae8805f3ada5c"} Feb 19 10:02:44 crc kubenswrapper[4780]: I0219 10:02:44.884382 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-ckt4w" podStartSLOduration=2.204273148 podStartE2EDuration="6.884358416s" podCreationTimestamp="2026-02-19 10:02:38 +0000 UTC" firstStartedPulling="2026-02-19 10:02:39.682392736 +0000 UTC m=+6102.426050195" lastFinishedPulling="2026-02-19 10:02:44.362477994 +0000 UTC m=+6107.106135463" observedRunningTime="2026-02-19 10:02:44.879085106 +0000 
UTC m=+6107.622742555" watchObservedRunningTime="2026-02-19 10:02:44.884358416 +0000 UTC m=+6107.628015885" Feb 19 10:02:45 crc kubenswrapper[4780]: I0219 10:02:45.966928 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285bbba2-f55d-49c4-af5a-cabda95e1597" path="/var/lib/kubelet/pods/285bbba2-f55d-49c4-af5a-cabda95e1597/volumes" Feb 19 10:02:45 crc kubenswrapper[4780]: I0219 10:02:45.968886 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba5a781-5b30-4c8c-b25e-55a268eb2353" path="/var/lib/kubelet/pods/5ba5a781-5b30-4c8c-b25e-55a268eb2353/volumes" Feb 19 10:02:47 crc kubenswrapper[4780]: I0219 10:02:47.887391 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb40d031-7af4-4922-bbe3-9d14ab3e80ff" containerID="30fc7b8e3590a849452ff1aede5a708f562f72894e44984afaeae8805f3ada5c" exitCode=0 Feb 19 10:02:47 crc kubenswrapper[4780]: I0219 10:02:47.887472 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ckt4w" event={"ID":"bb40d031-7af4-4922-bbe3-9d14ab3e80ff","Type":"ContainerDied","Data":"30fc7b8e3590a849452ff1aede5a708f562f72894e44984afaeae8805f3ada5c"} Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.384389 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.505932 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-combined-ca-bundle\") pod \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.506323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-config-data\") pod \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.506388 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvh92\" (UniqueName: \"kubernetes.io/projected/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-kube-api-access-tvh92\") pod \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.506453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-scripts\") pod \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\" (UID: \"bb40d031-7af4-4922-bbe3-9d14ab3e80ff\") " Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.514094 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-kube-api-access-tvh92" (OuterVolumeSpecName: "kube-api-access-tvh92") pod "bb40d031-7af4-4922-bbe3-9d14ab3e80ff" (UID: "bb40d031-7af4-4922-bbe3-9d14ab3e80ff"). InnerVolumeSpecName "kube-api-access-tvh92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.515851 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-scripts" (OuterVolumeSpecName: "scripts") pod "bb40d031-7af4-4922-bbe3-9d14ab3e80ff" (UID: "bb40d031-7af4-4922-bbe3-9d14ab3e80ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.563491 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-config-data" (OuterVolumeSpecName: "config-data") pod "bb40d031-7af4-4922-bbe3-9d14ab3e80ff" (UID: "bb40d031-7af4-4922-bbe3-9d14ab3e80ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.564487 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb40d031-7af4-4922-bbe3-9d14ab3e80ff" (UID: "bb40d031-7af4-4922-bbe3-9d14ab3e80ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.610092 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.610156 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.610173 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvh92\" (UniqueName: \"kubernetes.io/projected/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-kube-api-access-tvh92\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.610185 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb40d031-7af4-4922-bbe3-9d14ab3e80ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.930633 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ckt4w" event={"ID":"bb40d031-7af4-4922-bbe3-9d14ab3e80ff","Type":"ContainerDied","Data":"7c65c07fff1702e519c95c7881ac7a62ded29898e60ac8502127c514a9a9ef1d"} Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.930687 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c65c07fff1702e519c95c7881ac7a62ded29898e60ac8502127c514a9a9ef1d" Feb 19 10:02:49 crc kubenswrapper[4780]: I0219 10:02:49.930774 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-ckt4w" Feb 19 10:02:51 crc kubenswrapper[4780]: I0219 10:02:51.931542 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:02:52 crc kubenswrapper[4780]: I0219 10:02:52.044030 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gpgsw"] Feb 19 10:02:52 crc kubenswrapper[4780]: I0219 10:02:52.072916 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gpgsw"] Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.462065 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 10:02:53 crc kubenswrapper[4780]: E0219 10:02:53.462577 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb40d031-7af4-4922-bbe3-9d14ab3e80ff" containerName="aodh-db-sync" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.462593 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb40d031-7af4-4922-bbe3-9d14ab3e80ff" containerName="aodh-db-sync" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.462775 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb40d031-7af4-4922-bbe3-9d14ab3e80ff" containerName="aodh-db-sync" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.464892 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.467711 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.468290 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4wljw" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.468878 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.493990 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.618591 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-scripts\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.618975 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.619180 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m78\" (UniqueName: \"kubernetes.io/projected/8155dc58-df40-44b2-8a5f-913ece382018-kube-api-access-c4m78\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.619326 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-config-data\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.721568 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.721626 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4m78\" (UniqueName: \"kubernetes.io/projected/8155dc58-df40-44b2-8a5f-913ece382018-kube-api-access-c4m78\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.721681 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-config-data\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.721766 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-scripts\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.730882 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-config-data\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.732396 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-scripts\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.732624 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8155dc58-df40-44b2-8a5f-913ece382018-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.740046 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4m78\" (UniqueName: \"kubernetes.io/projected/8155dc58-df40-44b2-8a5f-913ece382018-kube-api-access-c4m78\") pod \"aodh-0\" (UID: \"8155dc58-df40-44b2-8a5f-913ece382018\") " pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.796029 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 10:02:53 crc kubenswrapper[4780]: I0219 10:02:53.959891 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c893a481-a297-4a71-8aed-a90c65624477" path="/var/lib/kubelet/pods/c893a481-a297-4a71-8aed-a90c65624477/volumes" Feb 19 10:02:54 crc kubenswrapper[4780]: I0219 10:02:54.392402 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 10:02:55 crc kubenswrapper[4780]: I0219 10:02:55.037389 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8155dc58-df40-44b2-8a5f-913ece382018","Type":"ContainerStarted","Data":"4f74f63bfefbd07f2805f3a5f56af822333053e716a4b84674206f4888549d44"} Feb 19 10:02:55 crc kubenswrapper[4780]: I0219 10:02:55.240237 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:55 crc kubenswrapper[4780]: I0219 10:02:55.241618 4780 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-central-agent" containerID="cri-o://fb35ed8e30ddc39cc5750abdeecff0a722ebd32f632c440f2f3295cd9021ee78" gracePeriod=30 Feb 19 10:02:55 crc kubenswrapper[4780]: I0219 10:02:55.241790 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="proxy-httpd" containerID="cri-o://d020f2920fa0451328833d279e83d1f4eb288ada42568d5c225f9444569af3f3" gracePeriod=30 Feb 19 10:02:55 crc kubenswrapper[4780]: I0219 10:02:55.241835 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="sg-core" containerID="cri-o://0c404cdd6ab1c590f48eb703f3ed6a198d516a2ffa454fac4e5ca670f2021438" gracePeriod=30 Feb 19 10:02:55 crc kubenswrapper[4780]: I0219 10:02:55.241868 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-notification-agent" containerID="cri-o://0432ac2753e983970d66683c4772de83662d57b42f8f24f2f5aa30132797e2d3" gracePeriod=30 Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.051983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8155dc58-df40-44b2-8a5f-913ece382018","Type":"ContainerStarted","Data":"f2c4280eda6e44da0411c1d0de35d85c30b67481ebee579b3d8f391c991d60db"} Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.058450 4780 generic.go:334] "Generic (PLEG): container finished" podID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerID="d020f2920fa0451328833d279e83d1f4eb288ada42568d5c225f9444569af3f3" exitCode=0 Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.058490 4780 generic.go:334] "Generic (PLEG): container finished" podID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" 
containerID="0c404cdd6ab1c590f48eb703f3ed6a198d516a2ffa454fac4e5ca670f2021438" exitCode=2
Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.058500 4780 generic.go:334] "Generic (PLEG): container finished" podID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerID="fb35ed8e30ddc39cc5750abdeecff0a722ebd32f632c440f2f3295cd9021ee78" exitCode=0
Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.058526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerDied","Data":"d020f2920fa0451328833d279e83d1f4eb288ada42568d5c225f9444569af3f3"}
Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.058560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerDied","Data":"0c404cdd6ab1c590f48eb703f3ed6a198d516a2ffa454fac4e5ca670f2021438"}
Feb 19 10:02:56 crc kubenswrapper[4780]: I0219 10:02:56.058574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerDied","Data":"fb35ed8e30ddc39cc5750abdeecff0a722ebd32f632c440f2f3295cd9021ee78"}
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.077571 4780 generic.go:334] "Generic (PLEG): container finished" podID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerID="0432ac2753e983970d66683c4772de83662d57b42f8f24f2f5aa30132797e2d3" exitCode=0
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.077978 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerDied","Data":"0432ac2753e983970d66683c4772de83662d57b42f8f24f2f5aa30132797e2d3"}
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.359680 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531468 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-log-httpd\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531581 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-config-data\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531606 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-scripts\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll7jk\" (UniqueName: \"kubernetes.io/projected/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-kube-api-access-ll7jk\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531728 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-sg-core-conf-yaml\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531813 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-combined-ca-bundle\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.531913 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-run-httpd\") pod \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\" (UID: \"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0\") "
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.532559 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.532680 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.537331 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-scripts" (OuterVolumeSpecName: "scripts") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.537897 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-kube-api-access-ll7jk" (OuterVolumeSpecName: "kube-api-access-ll7jk") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "kube-api-access-ll7jk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.580239 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.634757 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.634796 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll7jk\" (UniqueName: \"kubernetes.io/projected/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-kube-api-access-ll7jk\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.634805 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.634814 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.634823 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.637297 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.649303 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-config-data" (OuterVolumeSpecName: "config-data") pod "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" (UID: "de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.736647 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:57 crc kubenswrapper[4780]: I0219 10:02:57.736692 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.092807 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.092827 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0","Type":"ContainerDied","Data":"8ce0f5c1d42f56b893743a50b856a4cca8f66578d399ae84c36c9fe0b19a1f40"}
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.093013 4780 scope.go:117] "RemoveContainer" containerID="d020f2920fa0451328833d279e83d1f4eb288ada42568d5c225f9444569af3f3"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.095184 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8155dc58-df40-44b2-8a5f-913ece382018","Type":"ContainerStarted","Data":"f450f72d975a9b801e7ebada6ed4f5e6a60d6f97b693ded9937f534b810538d6"}
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.122593 4780 scope.go:117] "RemoveContainer" containerID="0c404cdd6ab1c590f48eb703f3ed6a198d516a2ffa454fac4e5ca670f2021438"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.140225 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.163501 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.177337 4780 scope.go:117] "RemoveContainer" containerID="0432ac2753e983970d66683c4772de83662d57b42f8f24f2f5aa30132797e2d3"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.182603 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:58 crc kubenswrapper[4780]: E0219 10:02:58.183227 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="sg-core"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183253 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="sg-core"
Feb 19 10:02:58 crc kubenswrapper[4780]: E0219 10:02:58.183266 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="proxy-httpd"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183277 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="proxy-httpd"
Feb 19 10:02:58 crc kubenswrapper[4780]: E0219 10:02:58.183304 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-notification-agent"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183314 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-notification-agent"
Feb 19 10:02:58 crc kubenswrapper[4780]: E0219 10:02:58.183338 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-central-agent"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183346 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-central-agent"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183600 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-notification-agent"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183632 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="sg-core"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183658 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="ceilometer-central-agent"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.183671 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" containerName="proxy-httpd"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.187345 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.190150 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.191971 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.194054 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.223612 4780 scope.go:117] "RemoveContainer" containerID="fb35ed8e30ddc39cc5750abdeecff0a722ebd32f632c440f2f3295cd9021ee78"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265175 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-run-httpd\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-config-data\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265323 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phd9m\" (UniqueName: \"kubernetes.io/projected/3eb76453-88df-4b6a-839e-ea810a0c67fb-kube-api-access-phd9m\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-log-httpd\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265488 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.265544 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-scripts\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367511 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phd9m\" (UniqueName: \"kubernetes.io/projected/3eb76453-88df-4b6a-839e-ea810a0c67fb-kube-api-access-phd9m\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367630 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367666 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-log-httpd\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367720 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367763 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-scripts\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367863 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-run-httpd\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.367899 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-config-data\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.369157 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-log-httpd\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.369402 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-run-httpd\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.374452 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.374604 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-scripts\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.375797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.376161 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-config-data\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.402106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phd9m\" (UniqueName: \"kubernetes.io/projected/3eb76453-88df-4b6a-839e-ea810a0c67fb-kube-api-access-phd9m\") pod \"ceilometer-0\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") " pod="openstack/ceilometer-0"
Feb 19 10:02:58 crc kubenswrapper[4780]: I0219 10:02:58.524210 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:02:59 crc kubenswrapper[4780]: I0219 10:02:59.258869 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:59 crc kubenswrapper[4780]: I0219 10:02:59.968003 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0" path="/var/lib/kubelet/pods/de788eb3-b2f0-463a-b6ed-1ebc86b4a4f0/volumes"
Feb 19 10:03:00 crc kubenswrapper[4780]: I0219 10:03:00.164342 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerStarted","Data":"dad71f9b7296a49078a3bb3a30924875849fd5212d4b74310924bc1374ea4c4f"}
Feb 19 10:03:00 crc kubenswrapper[4780]: I0219 10:03:00.194504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8155dc58-df40-44b2-8a5f-913ece382018","Type":"ContainerStarted","Data":"1bf0bc5098919dc9ad0d94616a799bcade6806ec18ca446bfa206526d7bf145a"}
Feb 19 10:03:00 crc kubenswrapper[4780]: E0219 10:03:00.218881 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice\": RecentStats: unable to find data in memory cache]"
Feb 19 10:03:01 crc kubenswrapper[4780]: I0219 10:03:01.208233 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerStarted","Data":"ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7"}
Feb 19 10:03:02 crc kubenswrapper[4780]: I0219 10:03:02.224229 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerStarted","Data":"68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328"}
Feb 19 10:03:02 crc kubenswrapper[4780]: I0219 10:03:02.225870 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerStarted","Data":"c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99"}
Feb 19 10:03:02 crc kubenswrapper[4780]: I0219 10:03:02.236250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8155dc58-df40-44b2-8a5f-913ece382018","Type":"ContainerStarted","Data":"f777634e30efbce4a3391b91e86c93f1ff9f40513740ff28a9b5c2e6863633ec"}
Feb 19 10:03:02 crc kubenswrapper[4780]: I0219 10:03:02.269854 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.647859387 podStartE2EDuration="9.269823679s" podCreationTimestamp="2026-02-19 10:02:53 +0000 UTC" firstStartedPulling="2026-02-19 10:02:54.405943808 +0000 UTC m=+6117.149601257" lastFinishedPulling="2026-02-19 10:03:01.0279081 +0000 UTC m=+6123.771565549" observedRunningTime="2026-02-19 10:03:02.258339844 +0000 UTC m=+6125.001997293" watchObservedRunningTime="2026-02-19 10:03:02.269823679 +0000 UTC m=+6125.013481128"
Feb 19 10:03:04 crc kubenswrapper[4780]: I0219 10:03:04.262259 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerStarted","Data":"52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709"}
Feb 19 10:03:04 crc kubenswrapper[4780]: I0219 10:03:04.263167 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 10:03:04 crc kubenswrapper[4780]: I0219 10:03:04.304814 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.177687839 podStartE2EDuration="6.304794792s" podCreationTimestamp="2026-02-19 10:02:58 +0000 UTC" firstStartedPulling="2026-02-19 10:02:59.266002015 +0000 UTC m=+6122.009659464" lastFinishedPulling="2026-02-19 10:03:03.393108948 +0000 UTC m=+6126.136766417" observedRunningTime="2026-02-19 10:03:04.293059591 +0000 UTC m=+6127.036717050" watchObservedRunningTime="2026-02-19 10:03:04.304794792 +0000 UTC m=+6127.048452241"
Feb 19 10:03:06 crc kubenswrapper[4780]: I0219 10:03:06.336324 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:03:06 crc kubenswrapper[4780]: I0219 10:03:06.336769 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.826221 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-tpz2f"]
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.828014 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.845867 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-tpz2f"]
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.939900 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-963e-account-create-update-hlbfq"]
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.941207 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.943263 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.957547 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-963e-account-create-update-hlbfq"]
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.989524 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab504879-e849-4ba6-8192-07859f31f11d-operator-scripts\") pod \"manila-db-create-tpz2f\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:08 crc kubenswrapper[4780]: I0219 10:03:08.989685 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjsh\" (UniqueName: \"kubernetes.io/projected/ab504879-e849-4ba6-8192-07859f31f11d-kube-api-access-5tjsh\") pod \"manila-db-create-tpz2f\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.092456 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgts\" (UniqueName: \"kubernetes.io/projected/0804b016-a025-4a0e-b9c9-1e7821b7d036-kube-api-access-2qgts\") pod \"manila-963e-account-create-update-hlbfq\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") " pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.092574 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjsh\" (UniqueName: \"kubernetes.io/projected/ab504879-e849-4ba6-8192-07859f31f11d-kube-api-access-5tjsh\") pod \"manila-db-create-tpz2f\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.092663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0804b016-a025-4a0e-b9c9-1e7821b7d036-operator-scripts\") pod \"manila-963e-account-create-update-hlbfq\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") " pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.092767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab504879-e849-4ba6-8192-07859f31f11d-operator-scripts\") pod \"manila-db-create-tpz2f\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.093749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab504879-e849-4ba6-8192-07859f31f11d-operator-scripts\") pod \"manila-db-create-tpz2f\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.117510 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tjsh\" (UniqueName: \"kubernetes.io/projected/ab504879-e849-4ba6-8192-07859f31f11d-kube-api-access-5tjsh\") pod \"manila-db-create-tpz2f\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.166572 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.194616 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgts\" (UniqueName: \"kubernetes.io/projected/0804b016-a025-4a0e-b9c9-1e7821b7d036-kube-api-access-2qgts\") pod \"manila-963e-account-create-update-hlbfq\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") " pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.194785 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0804b016-a025-4a0e-b9c9-1e7821b7d036-operator-scripts\") pod \"manila-963e-account-create-update-hlbfq\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") " pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.195715 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0804b016-a025-4a0e-b9c9-1e7821b7d036-operator-scripts\") pod \"manila-963e-account-create-update-hlbfq\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") " pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.221813 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgts\" (UniqueName: \"kubernetes.io/projected/0804b016-a025-4a0e-b9c9-1e7821b7d036-kube-api-access-2qgts\") pod \"manila-963e-account-create-update-hlbfq\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") " pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.258711 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.655827 4780 scope.go:117] "RemoveContainer" containerID="1275bb9bba97972a48e7bb4529a071acc50ad1b27f970d0705195d2f664f57b0"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.719328 4780 scope.go:117] "RemoveContainer" containerID="8b9986a86cd31b978e536d485323646ffa530cbf1565d28d0ea4056169645c6b"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.804564 4780 scope.go:117] "RemoveContainer" containerID="f53502caa130f7fe7be08951a0a90f58a9df789b99aac9addf21fbec89e56809"
Feb 19 10:03:09 crc kubenswrapper[4780]: I0219 10:03:09.868263 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-tpz2f"]
Feb 19 10:03:10 crc kubenswrapper[4780]: I0219 10:03:10.041938 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-963e-account-create-update-hlbfq"]
Feb 19 10:03:10 crc kubenswrapper[4780]: I0219 10:03:10.373027 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-963e-account-create-update-hlbfq" event={"ID":"0804b016-a025-4a0e-b9c9-1e7821b7d036","Type":"ContainerStarted","Data":"e1fd303898467b6e82324f081bbabee7fd26032e0666b7b93e03a95d9790bbcc"}
Feb 19 10:03:10 crc kubenswrapper[4780]: I0219 10:03:10.384776 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tpz2f" event={"ID":"ab504879-e849-4ba6-8192-07859f31f11d","Type":"ContainerStarted","Data":"b6b9b8e85f4ea5d29151394b3a8bb6313dca01dc462bbf82fe2f5f5fd425ff07"}
Feb 19 10:03:10 crc kubenswrapper[4780]: I0219 10:03:10.384839 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tpz2f" event={"ID":"ab504879-e849-4ba6-8192-07859f31f11d","Type":"ContainerStarted","Data":"770e7476bd6b1b4a8fa392214a8082a12bd7312ef7ecf895c52e78aa7954d244"}
Feb 19 10:03:10 crc kubenswrapper[4780]: I0219 10:03:10.422034 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-tpz2f" podStartSLOduration=2.422001066 podStartE2EDuration="2.422001066s" podCreationTimestamp="2026-02-19 10:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:10.406651756 +0000 UTC m=+6133.150309205" watchObservedRunningTime="2026-02-19 10:03:10.422001066 +0000 UTC m=+6133.165658515"
Feb 19 10:03:10 crc kubenswrapper[4780]: E0219 10:03:10.575443 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab504879_e849_4ba6_8192_07859f31f11d.slice/crio-conmon-b6b9b8e85f4ea5d29151394b3a8bb6313dca01dc462bbf82fe2f5f5fd425ff07.scope\": RecentStats: unable to find data in memory cache]"
Feb 19 10:03:11 crc kubenswrapper[4780]: I0219 10:03:11.396430 4780 generic.go:334] "Generic (PLEG): container finished" podID="ab504879-e849-4ba6-8192-07859f31f11d" containerID="b6b9b8e85f4ea5d29151394b3a8bb6313dca01dc462bbf82fe2f5f5fd425ff07" exitCode=0
Feb 19 10:03:11 crc kubenswrapper[4780]: I0219 10:03:11.396521 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tpz2f" event={"ID":"ab504879-e849-4ba6-8192-07859f31f11d","Type":"ContainerDied","Data":"b6b9b8e85f4ea5d29151394b3a8bb6313dca01dc462bbf82fe2f5f5fd425ff07"}
Feb 19 10:03:11 crc kubenswrapper[4780]: I0219 10:03:11.398915 4780 generic.go:334] "Generic (PLEG): container finished" podID="0804b016-a025-4a0e-b9c9-1e7821b7d036" containerID="d2df5cf98640ebf85f99699e4f5b9b749aba93771bb5160dc7dca34b2a398819" exitCode=0
Feb 19 10:03:11 crc kubenswrapper[4780]: I0219 10:03:11.398987 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-963e-account-create-update-hlbfq" event={"ID":"0804b016-a025-4a0e-b9c9-1e7821b7d036","Type":"ContainerDied","Data":"d2df5cf98640ebf85f99699e4f5b9b749aba93771bb5160dc7dca34b2a398819"}
Feb 19 10:03:12 crc kubenswrapper[4780]: I0219 10:03:12.983514 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-tpz2f"
Feb 19 10:03:12 crc kubenswrapper[4780]: I0219 10:03:12.993339 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-963e-account-create-update-hlbfq"
Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.118377 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0804b016-a025-4a0e-b9c9-1e7821b7d036-operator-scripts\") pod \"0804b016-a025-4a0e-b9c9-1e7821b7d036\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") "
Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.118542 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qgts\" (UniqueName: \"kubernetes.io/projected/0804b016-a025-4a0e-b9c9-1e7821b7d036-kube-api-access-2qgts\") pod \"0804b016-a025-4a0e-b9c9-1e7821b7d036\" (UID: \"0804b016-a025-4a0e-b9c9-1e7821b7d036\") "
Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.118823 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tjsh\" (UniqueName: \"kubernetes.io/projected/ab504879-e849-4ba6-8192-07859f31f11d-kube-api-access-5tjsh\") pod \"ab504879-e849-4ba6-8192-07859f31f11d\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") "
Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.118853
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab504879-e849-4ba6-8192-07859f31f11d-operator-scripts\") pod \"ab504879-e849-4ba6-8192-07859f31f11d\" (UID: \"ab504879-e849-4ba6-8192-07859f31f11d\") " Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.119066 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0804b016-a025-4a0e-b9c9-1e7821b7d036-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0804b016-a025-4a0e-b9c9-1e7821b7d036" (UID: "0804b016-a025-4a0e-b9c9-1e7821b7d036"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.119972 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab504879-e849-4ba6-8192-07859f31f11d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab504879-e849-4ba6-8192-07859f31f11d" (UID: "ab504879-e849-4ba6-8192-07859f31f11d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.120844 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0804b016-a025-4a0e-b9c9-1e7821b7d036-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.120913 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab504879-e849-4ba6-8192-07859f31f11d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.127221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab504879-e849-4ba6-8192-07859f31f11d-kube-api-access-5tjsh" (OuterVolumeSpecName: "kube-api-access-5tjsh") pod "ab504879-e849-4ba6-8192-07859f31f11d" (UID: "ab504879-e849-4ba6-8192-07859f31f11d"). InnerVolumeSpecName "kube-api-access-5tjsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.127517 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0804b016-a025-4a0e-b9c9-1e7821b7d036-kube-api-access-2qgts" (OuterVolumeSpecName: "kube-api-access-2qgts") pod "0804b016-a025-4a0e-b9c9-1e7821b7d036" (UID: "0804b016-a025-4a0e-b9c9-1e7821b7d036"). InnerVolumeSpecName "kube-api-access-2qgts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.223373 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tjsh\" (UniqueName: \"kubernetes.io/projected/ab504879-e849-4ba6-8192-07859f31f11d-kube-api-access-5tjsh\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.223813 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qgts\" (UniqueName: \"kubernetes.io/projected/0804b016-a025-4a0e-b9c9-1e7821b7d036-kube-api-access-2qgts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.427025 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tpz2f" event={"ID":"ab504879-e849-4ba6-8192-07859f31f11d","Type":"ContainerDied","Data":"770e7476bd6b1b4a8fa392214a8082a12bd7312ef7ecf895c52e78aa7954d244"} Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.427081 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="770e7476bd6b1b4a8fa392214a8082a12bd7312ef7ecf895c52e78aa7954d244" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.427038 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-tpz2f" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.431318 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-963e-account-create-update-hlbfq" event={"ID":"0804b016-a025-4a0e-b9c9-1e7821b7d036","Type":"ContainerDied","Data":"e1fd303898467b6e82324f081bbabee7fd26032e0666b7b93e03a95d9790bbcc"} Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.431370 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1fd303898467b6e82324f081bbabee7fd26032e0666b7b93e03a95d9790bbcc" Feb 19 10:03:13 crc kubenswrapper[4780]: I0219 10:03:13.431441 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-963e-account-create-update-hlbfq" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.263316 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-tjbkn"] Feb 19 10:03:19 crc kubenswrapper[4780]: E0219 10:03:19.264713 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab504879-e849-4ba6-8192-07859f31f11d" containerName="mariadb-database-create" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.264740 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab504879-e849-4ba6-8192-07859f31f11d" containerName="mariadb-database-create" Feb 19 10:03:19 crc kubenswrapper[4780]: E0219 10:03:19.264827 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0804b016-a025-4a0e-b9c9-1e7821b7d036" containerName="mariadb-account-create-update" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.264843 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0804b016-a025-4a0e-b9c9-1e7821b7d036" containerName="mariadb-account-create-update" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.265218 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0804b016-a025-4a0e-b9c9-1e7821b7d036" containerName="mariadb-account-create-update" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.265256 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab504879-e849-4ba6-8192-07859f31f11d" containerName="mariadb-database-create" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.266304 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.269383 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tlswr" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.271090 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.286803 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-tjbkn"] Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.387536 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675sg\" (UniqueName: \"kubernetes.io/projected/6657099e-4964-425d-8036-6d34d3c7faf4-kube-api-access-675sg\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.387852 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-config-data\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.389252 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-combined-ca-bundle\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.389510 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-job-config-data\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.492029 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-combined-ca-bundle\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.492192 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-job-config-data\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.492265 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675sg\" (UniqueName: \"kubernetes.io/projected/6657099e-4964-425d-8036-6d34d3c7faf4-kube-api-access-675sg\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.492308 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-config-data\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.502963 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-job-config-data\") pod \"manila-db-sync-tjbkn\" (UID: 
\"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.504209 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-config-data\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.508733 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-combined-ca-bundle\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.520010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675sg\" (UniqueName: \"kubernetes.io/projected/6657099e-4964-425d-8036-6d34d3c7faf4-kube-api-access-675sg\") pod \"manila-db-sync-tjbkn\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:19 crc kubenswrapper[4780]: I0219 10:03:19.609752 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:20 crc kubenswrapper[4780]: E0219 10:03:20.851138 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice\": RecentStats: unable to find data in memory cache]" Feb 19 10:03:21 crc kubenswrapper[4780]: I0219 10:03:21.240731 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-tjbkn"] Feb 19 10:03:21 crc kubenswrapper[4780]: I0219 10:03:21.582059 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tjbkn" event={"ID":"6657099e-4964-425d-8036-6d34d3c7faf4","Type":"ContainerStarted","Data":"34366af3f5a6b56ea36c9a9d15ce6710e3a2fd89731ef40c0c0228fdbeaaa2bd"} Feb 19 10:03:26 crc kubenswrapper[4780]: I0219 10:03:26.646292 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tjbkn" event={"ID":"6657099e-4964-425d-8036-6d34d3c7faf4","Type":"ContainerStarted","Data":"478b067e30d0d42789728b14d143d43c3ac92560f5e250ae3df6955e55cba3ea"} Feb 19 10:03:26 crc kubenswrapper[4780]: I0219 10:03:26.670407 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-tjbkn" podStartSLOduration=3.5393367060000003 podStartE2EDuration="7.67038436s" podCreationTimestamp="2026-02-19 10:03:19 +0000 UTC" firstStartedPulling="2026-02-19 10:03:21.231998082 +0000 UTC m=+6143.975655531" lastFinishedPulling="2026-02-19 10:03:25.363045726 +0000 UTC m=+6148.106703185" observedRunningTime="2026-02-19 10:03:26.660021981 +0000 UTC m=+6149.403679430" watchObservedRunningTime="2026-02-19 10:03:26.67038436 +0000 UTC m=+6149.414041809" Feb 19 10:03:28 crc kubenswrapper[4780]: I0219 10:03:28.538019 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:03:28 crc 
kubenswrapper[4780]: I0219 10:03:28.673517 4780 generic.go:334] "Generic (PLEG): container finished" podID="6657099e-4964-425d-8036-6d34d3c7faf4" containerID="478b067e30d0d42789728b14d143d43c3ac92560f5e250ae3df6955e55cba3ea" exitCode=0 Feb 19 10:03:28 crc kubenswrapper[4780]: I0219 10:03:28.673580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tjbkn" event={"ID":"6657099e-4964-425d-8036-6d34d3c7faf4","Type":"ContainerDied","Data":"478b067e30d0d42789728b14d143d43c3ac92560f5e250ae3df6955e55cba3ea"} Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.237706 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.381023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-675sg\" (UniqueName: \"kubernetes.io/projected/6657099e-4964-425d-8036-6d34d3c7faf4-kube-api-access-675sg\") pod \"6657099e-4964-425d-8036-6d34d3c7faf4\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.381161 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-job-config-data\") pod \"6657099e-4964-425d-8036-6d34d3c7faf4\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.381226 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-combined-ca-bundle\") pod \"6657099e-4964-425d-8036-6d34d3c7faf4\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.381265 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-config-data\") pod \"6657099e-4964-425d-8036-6d34d3c7faf4\" (UID: \"6657099e-4964-425d-8036-6d34d3c7faf4\") " Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.391367 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6657099e-4964-425d-8036-6d34d3c7faf4-kube-api-access-675sg" (OuterVolumeSpecName: "kube-api-access-675sg") pod "6657099e-4964-425d-8036-6d34d3c7faf4" (UID: "6657099e-4964-425d-8036-6d34d3c7faf4"). InnerVolumeSpecName "kube-api-access-675sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.403583 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-config-data" (OuterVolumeSpecName: "config-data") pod "6657099e-4964-425d-8036-6d34d3c7faf4" (UID: "6657099e-4964-425d-8036-6d34d3c7faf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.403760 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "6657099e-4964-425d-8036-6d34d3c7faf4" (UID: "6657099e-4964-425d-8036-6d34d3c7faf4"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.415631 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6657099e-4964-425d-8036-6d34d3c7faf4" (UID: "6657099e-4964-425d-8036-6d34d3c7faf4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.483476 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.483690 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.483771 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-675sg\" (UniqueName: \"kubernetes.io/projected/6657099e-4964-425d-8036-6d34d3c7faf4-kube-api-access-675sg\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.483828 4780 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6657099e-4964-425d-8036-6d34d3c7faf4-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.699849 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tjbkn" event={"ID":"6657099e-4964-425d-8036-6d34d3c7faf4","Type":"ContainerDied","Data":"34366af3f5a6b56ea36c9a9d15ce6710e3a2fd89731ef40c0c0228fdbeaaa2bd"} Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.700320 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34366af3f5a6b56ea36c9a9d15ce6710e3a2fd89731ef40c0c0228fdbeaaa2bd" Feb 19 10:03:30 crc kubenswrapper[4780]: I0219 10:03:30.699964 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-tjbkn" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.313234 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 19 10:03:31 crc kubenswrapper[4780]: E0219 10:03:31.315207 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6657099e-4964-425d-8036-6d34d3c7faf4" containerName="manila-db-sync" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.315310 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6657099e-4964-425d-8036-6d34d3c7faf4" containerName="manila-db-sync" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.315731 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6657099e-4964-425d-8036-6d34d3c7faf4" containerName="manila-db-sync" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.317611 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.323488 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.323811 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tlswr" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.326622 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.328062 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.338639 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.348969 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.360371 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.372771 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.399326 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.422678 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-558f8558d5-g2glz"] Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.423620 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.423673 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w8qd\" (UniqueName: \"kubernetes.io/projected/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-kube-api-access-2w8qd\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.423724 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.423771 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.423856 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-scripts\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.423891 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-config-data\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.425703 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.471258 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-558f8558d5-g2glz"] Feb 19 10:03:31 crc kubenswrapper[4780]: E0219 10:03:31.473589 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice\": RecentStats: unable to find data in memory cache]" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.529980 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mj7b\" (UniqueName: \"kubernetes.io/projected/1211f1b2-3545-4ac9-8913-63ee3ed133ad-kube-api-access-6mj7b\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-sb\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530085 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1211f1b2-3545-4ac9-8913-63ee3ed133ad-ceph\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530136 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530173 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530193 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w8qd\" (UniqueName: \"kubernetes.io/projected/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-kube-api-access-2w8qd\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530226 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-config\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530270 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-combined-ca-bundle\") pod 
\"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530299 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1211f1b2-3545-4ac9-8913-63ee3ed133ad-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530358 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-scripts\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530380 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-config-data\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530398 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-nb\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " 
pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530424 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wt5\" (UniqueName: \"kubernetes.io/projected/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-kube-api-access-l2wt5\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530457 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-config-data\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530481 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1211f1b2-3545-4ac9-8913-63ee3ed133ad-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530510 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-dns-svc\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.530526 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-scripts\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " 
pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.540525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.540626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.557000 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.557384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-scripts\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.564097 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.570850 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.572748 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-config-data\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.578618 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.584871 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w8qd\" (UniqueName: \"kubernetes.io/projected/e9a884f6-60ed-4e43-9d3d-9c005737cc3d-kube-api-access-2w8qd\") pod \"manila-scheduler-0\" (UID: \"e9a884f6-60ed-4e43-9d3d-9c005737cc3d\") " pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.609335 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634118 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-sb\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634236 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/projected/1211f1b2-3545-4ac9-8913-63ee3ed133ad-ceph\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634306 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634410 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-config\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634465 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1211f1b2-3545-4ac9-8913-63ee3ed133ad-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634568 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-nb\") pod 
\"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634620 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wt5\" (UniqueName: \"kubernetes.io/projected/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-kube-api-access-l2wt5\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634655 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-config-data\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634700 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1211f1b2-3545-4ac9-8913-63ee3ed133ad-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-dns-svc\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634776 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-scripts\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " 
pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.634798 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mj7b\" (UniqueName: \"kubernetes.io/projected/1211f1b2-3545-4ac9-8913-63ee3ed133ad-kube-api-access-6mj7b\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.636399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-sb\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.637258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-nb\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.637336 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1211f1b2-3545-4ac9-8913-63ee3ed133ad-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.643087 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-config\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.644284 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1211f1b2-3545-4ac9-8913-63ee3ed133ad-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.652692 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-dns-svc\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.659081 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-config-data\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.669736 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1211f1b2-3545-4ac9-8913-63ee3ed133ad-ceph\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.671709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-scripts\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.672382 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mj7b\" (UniqueName: \"kubernetes.io/projected/1211f1b2-3545-4ac9-8913-63ee3ed133ad-kube-api-access-6mj7b\") pod 
\"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.687676 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.689219 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wt5\" (UniqueName: \"kubernetes.io/projected/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-kube-api-access-l2wt5\") pod \"dnsmasq-dns-558f8558d5-g2glz\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.690009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1211f1b2-3545-4ac9-8913-63ee3ed133ad-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1211f1b2-3545-4ac9-8913-63ee3ed133ad\") " pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.735682 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.736546 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-config-data\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.736615 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-config-data-custom\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.736668 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-etc-machine-id\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.736688 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.736711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsb72\" (UniqueName: \"kubernetes.io/projected/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-kube-api-access-jsb72\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: 
I0219 10:03:31.736870 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-logs\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.736931 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-scripts\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.789071 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.818461 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843017 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-etc-machine-id\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843098 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843150 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsb72\" (UniqueName: 
\"kubernetes.io/projected/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-kube-api-access-jsb72\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-logs\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-scripts\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843367 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-config-data\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.843452 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-config-data-custom\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.845019 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-etc-machine-id\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.851672 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-logs\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.855093 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.856631 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-config-data\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.857417 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-config-data-custom\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.858212 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-scripts\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " pod="openstack/manila-api-0" Feb 19 10:03:31 crc kubenswrapper[4780]: I0219 10:03:31.862411 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsb72\" (UniqueName: \"kubernetes.io/projected/5c366691-b5b9-4dd7-a9c1-0ac7a6542898-kube-api-access-jsb72\") pod \"manila-api-0\" (UID: \"5c366691-b5b9-4dd7-a9c1-0ac7a6542898\") " 
pod="openstack/manila-api-0" Feb 19 10:03:32 crc kubenswrapper[4780]: I0219 10:03:32.137643 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 19 10:03:32 crc kubenswrapper[4780]: I0219 10:03:32.451292 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 19 10:03:32 crc kubenswrapper[4780]: I0219 10:03:32.736004 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 19 10:03:32 crc kubenswrapper[4780]: I0219 10:03:32.741746 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e9a884f6-60ed-4e43-9d3d-9c005737cc3d","Type":"ContainerStarted","Data":"bb4490d453011ea6635e7e3cdc36433df853344f22544b793e7da2de5ccd09c6"} Feb 19 10:03:32 crc kubenswrapper[4780]: W0219 10:03:32.743636 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1211f1b2_3545_4ac9_8913_63ee3ed133ad.slice/crio-2486c64a9822131acb7d84a8ee08867ad07efe2411a2358ede098e8b7fa795c2 WatchSource:0}: Error finding container 2486c64a9822131acb7d84a8ee08867ad07efe2411a2358ede098e8b7fa795c2: Status 404 returned error can't find the container with id 2486c64a9822131acb7d84a8ee08867ad07efe2411a2358ede098e8b7fa795c2 Feb 19 10:03:32 crc kubenswrapper[4780]: I0219 10:03:32.760974 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-558f8558d5-g2glz"] Feb 19 10:03:32 crc kubenswrapper[4780]: W0219 10:03:32.761724 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc04cb27_cdce_4b21_9bf9_6a1054e56c03.slice/crio-b2dac0b1d254e947020085eef072e70ca0aee531b7ba6061cf4334714f7d1cac WatchSource:0}: Error finding container b2dac0b1d254e947020085eef072e70ca0aee531b7ba6061cf4334714f7d1cac: Status 404 returned error can't find the container with id 
b2dac0b1d254e947020085eef072e70ca0aee531b7ba6061cf4334714f7d1cac Feb 19 10:03:33 crc kubenswrapper[4780]: I0219 10:03:33.087834 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 19 10:03:33 crc kubenswrapper[4780]: I0219 10:03:33.758481 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1211f1b2-3545-4ac9-8913-63ee3ed133ad","Type":"ContainerStarted","Data":"2486c64a9822131acb7d84a8ee08867ad07efe2411a2358ede098e8b7fa795c2"} Feb 19 10:03:33 crc kubenswrapper[4780]: I0219 10:03:33.761909 4780 generic.go:334] "Generic (PLEG): container finished" podID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerID="e8f14149a4f0a4270a08e503853374ae250b6bdc22be58b6a4f7d3df4188f847" exitCode=0 Feb 19 10:03:33 crc kubenswrapper[4780]: I0219 10:03:33.761986 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" event={"ID":"bc04cb27-cdce-4b21-9bf9-6a1054e56c03","Type":"ContainerDied","Data":"e8f14149a4f0a4270a08e503853374ae250b6bdc22be58b6a4f7d3df4188f847"} Feb 19 10:03:33 crc kubenswrapper[4780]: I0219 10:03:33.762015 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" event={"ID":"bc04cb27-cdce-4b21-9bf9-6a1054e56c03","Type":"ContainerStarted","Data":"b2dac0b1d254e947020085eef072e70ca0aee531b7ba6061cf4334714f7d1cac"} Feb 19 10:03:33 crc kubenswrapper[4780]: I0219 10:03:33.768248 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5c366691-b5b9-4dd7-a9c1-0ac7a6542898","Type":"ContainerStarted","Data":"9a3ac6f52faec0841db2c146648dff74f1f729b82d03edbd608bb22a63c4c8dc"} Feb 19 10:03:34 crc kubenswrapper[4780]: I0219 10:03:34.815493 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"5c366691-b5b9-4dd7-a9c1-0ac7a6542898","Type":"ContainerStarted","Data":"8f768960d4f9252828d45ec9bf61b7a5586a69bb482cfeda826104fcd30011e3"} Feb 19 10:03:34 crc kubenswrapper[4780]: I0219 10:03:34.852468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e9a884f6-60ed-4e43-9d3d-9c005737cc3d","Type":"ContainerStarted","Data":"50d9660183b9f83a32f4a9fb0b59149a269ecbd5d67c1277ee8504dcca2528d4"} Feb 19 10:03:34 crc kubenswrapper[4780]: I0219 10:03:34.862444 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" event={"ID":"bc04cb27-cdce-4b21-9bf9-6a1054e56c03","Type":"ContainerStarted","Data":"c835cdc8838d5ea2fd108dcbd65813f3cd4a17244dfe396a9af7315775b13ae1"} Feb 19 10:03:34 crc kubenswrapper[4780]: I0219 10:03:34.862852 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:03:34 crc kubenswrapper[4780]: I0219 10:03:34.899909 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" podStartSLOduration=3.899875461 podStartE2EDuration="3.899875461s" podCreationTimestamp="2026-02-19 10:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:34.892178848 +0000 UTC m=+6157.635836297" watchObservedRunningTime="2026-02-19 10:03:34.899875461 +0000 UTC m=+6157.643532910" Feb 19 10:03:35 crc kubenswrapper[4780]: I0219 10:03:35.880485 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5c366691-b5b9-4dd7-a9c1-0ac7a6542898","Type":"ContainerStarted","Data":"09e1ffa9193bbe2512b17334589d9d8a05bee3c2f167f94a49e7751b19258a98"} Feb 19 10:03:35 crc kubenswrapper[4780]: I0219 10:03:35.891285 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 19 10:03:35 crc 
kubenswrapper[4780]: I0219 10:03:35.894759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e9a884f6-60ed-4e43-9d3d-9c005737cc3d","Type":"ContainerStarted","Data":"36d7438dc1ffc600d25bc02a21e234ec3b7ede6d5f3b421b19d1af9104748b2c"}
Feb 19 10:03:35 crc kubenswrapper[4780]: I0219 10:03:35.915773 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.915746611 podStartE2EDuration="4.915746611s" podCreationTimestamp="2026-02-19 10:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:35.909667659 +0000 UTC m=+6158.653325138" watchObservedRunningTime="2026-02-19 10:03:35.915746611 +0000 UTC m=+6158.659404060"
Feb 19 10:03:35 crc kubenswrapper[4780]: I0219 10:03:35.943649 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.761855053 podStartE2EDuration="4.94362442s" podCreationTimestamp="2026-02-19 10:03:31 +0000 UTC" firstStartedPulling="2026-02-19 10:03:32.433012344 +0000 UTC m=+6155.176669793" lastFinishedPulling="2026-02-19 10:03:33.614781711 +0000 UTC m=+6156.358439160" observedRunningTime="2026-02-19 10:03:35.934366518 +0000 UTC m=+6158.678023977" watchObservedRunningTime="2026-02-19 10:03:35.94362442 +0000 UTC m=+6158.687281889"
Feb 19 10:03:36 crc kubenswrapper[4780]: I0219 10:03:36.336637 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:03:36 crc kubenswrapper[4780]: I0219 10:03:36.336721 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.269890 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.271313 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-central-agent" containerID="cri-o://ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7" gracePeriod=30
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.271493 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-notification-agent" containerID="cri-o://c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99" gracePeriod=30
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.271512 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="proxy-httpd" containerID="cri-o://52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709" gracePeriod=30
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.271740 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="sg-core" containerID="cri-o://68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328" gracePeriod=30
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.967154 4780 generic.go:334] "Generic (PLEG): container finished" podID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerID="52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709" exitCode=0
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.967589 4780 generic.go:334] "Generic (PLEG): container finished" podID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerID="68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328" exitCode=2
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.967600 4780 generic.go:334] "Generic (PLEG): container finished" podID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerID="ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7" exitCode=0
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.967288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerDied","Data":"52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709"}
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.967652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerDied","Data":"68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328"}
Feb 19 10:03:38 crc kubenswrapper[4780]: I0219 10:03:38.967670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerDied","Data":"ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7"}
Feb 19 10:03:41 crc kubenswrapper[4780]: I0219 10:03:41.736938 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Feb 19 10:03:41 crc kubenswrapper[4780]: E0219 10:03:41.797363 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice\": RecentStats: unable to find data in memory cache]"
Feb 19 10:03:41 crc kubenswrapper[4780]: I0219 10:03:41.821412 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-558f8558d5-g2glz"
Feb 19 10:03:41 crc kubenswrapper[4780]: I0219 10:03:41.910628 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c857455cc-2kg4n"]
Feb 19 10:03:41 crc kubenswrapper[4780]: I0219 10:03:41.910936 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerName="dnsmasq-dns" containerID="cri-o://13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba" gracePeriod=10
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.452388 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n"
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.598024 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-sb\") pod \"ac612177-68e7-431e-aaa2-f21833ccaa6e\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") "
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.598370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc\") pod \"ac612177-68e7-431e-aaa2-f21833ccaa6e\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") "
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.598583 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwqng\" (UniqueName: \"kubernetes.io/projected/ac612177-68e7-431e-aaa2-f21833ccaa6e-kube-api-access-nwqng\") pod \"ac612177-68e7-431e-aaa2-f21833ccaa6e\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") "
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.598646 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-nb\") pod \"ac612177-68e7-431e-aaa2-f21833ccaa6e\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") "
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.598686 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-config\") pod \"ac612177-68e7-431e-aaa2-f21833ccaa6e\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") "
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.605102 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac612177-68e7-431e-aaa2-f21833ccaa6e-kube-api-access-nwqng" (OuterVolumeSpecName: "kube-api-access-nwqng") pod "ac612177-68e7-431e-aaa2-f21833ccaa6e" (UID: "ac612177-68e7-431e-aaa2-f21833ccaa6e"). InnerVolumeSpecName "kube-api-access-nwqng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.687363 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac612177-68e7-431e-aaa2-f21833ccaa6e" (UID: "ac612177-68e7-431e-aaa2-f21833ccaa6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.699316 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-config" (OuterVolumeSpecName: "config") pod "ac612177-68e7-431e-aaa2-f21833ccaa6e" (UID: "ac612177-68e7-431e-aaa2-f21833ccaa6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.700178 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac612177-68e7-431e-aaa2-f21833ccaa6e" (UID: "ac612177-68e7-431e-aaa2-f21833ccaa6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.700472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc\") pod \"ac612177-68e7-431e-aaa2-f21833ccaa6e\" (UID: \"ac612177-68e7-431e-aaa2-f21833ccaa6e\") "
Feb 19 10:03:42 crc kubenswrapper[4780]: W0219 10:03:42.700790 4780 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ac612177-68e7-431e-aaa2-f21833ccaa6e/volumes/kubernetes.io~configmap/dns-svc
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.700832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac612177-68e7-431e-aaa2-f21833ccaa6e" (UID: "ac612177-68e7-431e-aaa2-f21833ccaa6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.701441 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.701468 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.701482 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwqng\" (UniqueName: \"kubernetes.io/projected/ac612177-68e7-431e-aaa2-f21833ccaa6e-kube-api-access-nwqng\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.701497 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.708690 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac612177-68e7-431e-aaa2-f21833ccaa6e" (UID: "ac612177-68e7-431e-aaa2-f21833ccaa6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:42 crc kubenswrapper[4780]: I0219 10:03:42.803946 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac612177-68e7-431e-aaa2-f21833ccaa6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.024019 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1211f1b2-3545-4ac9-8913-63ee3ed133ad","Type":"ContainerStarted","Data":"c13ae619a44fe55ed23caa6f8b2999d1bb7256fa2a97984d87cd94a8e3724a2e"}
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.027706 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerID="13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba" exitCode=0
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.027753 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" event={"ID":"ac612177-68e7-431e-aaa2-f21833ccaa6e","Type":"ContainerDied","Data":"13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba"}
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.027776 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.027795 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c857455cc-2kg4n" event={"ID":"ac612177-68e7-431e-aaa2-f21833ccaa6e","Type":"ContainerDied","Data":"71398f689b8e82d6bdea2e8bafa008e5472f113f0f2c402e43c05f4a1a6dea9a"}
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.027814 4780 scope.go:117] "RemoveContainer" containerID="13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.062540 4780 scope.go:117] "RemoveContainer" containerID="53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.082345 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c857455cc-2kg4n"]
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.094949 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c857455cc-2kg4n"]
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.185056 4780 scope.go:117] "RemoveContainer" containerID="13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba"
Feb 19 10:03:43 crc kubenswrapper[4780]: E0219 10:03:43.186298 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba\": container with ID starting with 13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba not found: ID does not exist" containerID="13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.186353 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba"} err="failed to get container status \"13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba\": rpc error: code = NotFound desc = could not find container \"13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba\": container with ID starting with 13ad8c4829761e743bb13b436e95b222c57dc48bcb4a36b0d88a1dbe322a38ba not found: ID does not exist"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.186383 4780 scope.go:117] "RemoveContainer" containerID="53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e"
Feb 19 10:03:43 crc kubenswrapper[4780]: E0219 10:03:43.187715 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e\": container with ID starting with 53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e not found: ID does not exist" containerID="53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.187743 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e"} err="failed to get container status \"53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e\": rpc error: code = NotFound desc = could not find container \"53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e\": container with ID starting with 53025e3c8e9803533f17986750147dae044f76b7b004d6e09c23e1b201a6473e not found: ID does not exist"
Feb 19 10:03:43 crc kubenswrapper[4780]: I0219 10:03:43.957499 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" path="/var/lib/kubelet/pods/ac612177-68e7-431e-aaa2-f21833ccaa6e/volumes"
Feb 19 10:03:44 crc kubenswrapper[4780]: I0219 10:03:44.046855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1211f1b2-3545-4ac9-8913-63ee3ed133ad","Type":"ContainerStarted","Data":"6bea82794b149d5824a466cba1976b77673cddbb1838cc74be32e780e35a0f03"}
Feb 19 10:03:44 crc kubenswrapper[4780]: I0219 10:03:44.072830 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.76905891 podStartE2EDuration="13.072812194s" podCreationTimestamp="2026-02-19 10:03:31 +0000 UTC" firstStartedPulling="2026-02-19 10:03:32.756739807 +0000 UTC m=+6155.500397256" lastFinishedPulling="2026-02-19 10:03:42.060493091 +0000 UTC m=+6164.804150540" observedRunningTime="2026-02-19 10:03:44.06744874 +0000 UTC m=+6166.811106189" watchObservedRunningTime="2026-02-19 10:03:44.072812194 +0000 UTC m=+6166.816469643"
Feb 19 10:03:44 crc kubenswrapper[4780]: I0219 10:03:44.988079 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.076410 4780 generic.go:334] "Generic (PLEG): container finished" podID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerID="c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99" exitCode=0
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.077193 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.077841 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerDied","Data":"c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99"}
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.077885 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3eb76453-88df-4b6a-839e-ea810a0c67fb","Type":"ContainerDied","Data":"dad71f9b7296a49078a3bb3a30924875849fd5212d4b74310924bc1374ea4c4f"}
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.077907 4780 scope.go:117] "RemoveContainer" containerID="52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079042 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-log-httpd\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079088 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phd9m\" (UniqueName: \"kubernetes.io/projected/3eb76453-88df-4b6a-839e-ea810a0c67fb-kube-api-access-phd9m\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079275 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-sg-core-conf-yaml\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079391 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-config-data\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079427 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-combined-ca-bundle\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079495 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-run-httpd\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.079601 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-scripts\") pod \"3eb76453-88df-4b6a-839e-ea810a0c67fb\" (UID: \"3eb76453-88df-4b6a-839e-ea810a0c67fb\") "
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.086962 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.087790 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.096504 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-scripts" (OuterVolumeSpecName: "scripts") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.104474 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb76453-88df-4b6a-839e-ea810a0c67fb-kube-api-access-phd9m" (OuterVolumeSpecName: "kube-api-access-phd9m") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "kube-api-access-phd9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.139209 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.184447 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.184484 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phd9m\" (UniqueName: \"kubernetes.io/projected/3eb76453-88df-4b6a-839e-ea810a0c67fb-kube-api-access-phd9m\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.184493 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.184504 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3eb76453-88df-4b6a-839e-ea810a0c67fb-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.184513 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.189838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.233797 4780 scope.go:117] "RemoveContainer" containerID="68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.255719 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-config-data" (OuterVolumeSpecName: "config-data") pod "3eb76453-88df-4b6a-839e-ea810a0c67fb" (UID: "3eb76453-88df-4b6a-839e-ea810a0c67fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.256421 4780 scope.go:117] "RemoveContainer" containerID="c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.285322 4780 scope.go:117] "RemoveContainer" containerID="ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.286527 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.286569 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb76453-88df-4b6a-839e-ea810a0c67fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.321531 4780 scope.go:117] "RemoveContainer" containerID="52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.323995 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709\": container with ID starting with 52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709 not found: ID does not exist" containerID="52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.324060 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709"} err="failed to get container status \"52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709\": rpc error: code = NotFound desc = could not find container \"52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709\": container with ID starting with 52649de5ebccb08f314bf016b3b53ed0660be516b370d15a6ff0fe3abc12e709 not found: ID does not exist"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.324100 4780 scope.go:117] "RemoveContainer" containerID="68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.324754 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328\": container with ID starting with 68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328 not found: ID does not exist" containerID="68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.324789 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328"} err="failed to get container status \"68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328\": rpc error: code = NotFound desc = could not find container \"68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328\": container with ID starting with 68784f33b1137a3842c86fefc4e36a4225b0ec2eddb5ddb7d27ef33c6574c328 not found: ID does not exist"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.324811 4780 scope.go:117] "RemoveContainer" containerID="c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.325254 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99\": container with ID starting with c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99 not found: ID does not exist" containerID="c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.325303 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99"} err="failed to get container status \"c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99\": rpc error: code = NotFound desc = could not find container \"c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99\": container with ID starting with c7e4cd056f32a65ad8746135a36a5ec825c8c0e072c60bfcb719560652e8ec99 not found: ID does not exist"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.325361 4780 scope.go:117] "RemoveContainer" containerID="ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.325629 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7\": container with ID starting with ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7 not found: ID does not exist" containerID="ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.325669 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7"} err="failed to get container status \"ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7\": rpc error: code = NotFound desc = could not find container \"ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7\": container with ID starting with ecfa7280eeecce93649429d8e1dcb9d573152a8b0f84166a0008f592bdad26b7 not found: ID does not exist"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.642055 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.662837 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681149 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.681758 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-central-agent"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681784 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-central-agent"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.681801 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="proxy-httpd"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681810 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="proxy-httpd"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.681837 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-notification-agent"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681845 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-notification-agent"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.681857 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="sg-core"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681864 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="sg-core"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.681885 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerName="init"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681893 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerName="init"
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.681913 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerName="dnsmasq-dns"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.681921 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerName="dnsmasq-dns"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.682194 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-central-agent"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.682229 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="ceilometer-notification-agent"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.682242 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac612177-68e7-431e-aaa2-f21833ccaa6e" containerName="dnsmasq-dns"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.682253 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="proxy-httpd"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.682268 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" containerName="sg-core"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.684792 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.688728 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.688903 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.693041 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.798106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-run-httpd\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.798705 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgxsg\" (UniqueName: \"kubernetes.io/projected/785b5604-93c8-4a59-9c4a-e675dba2af48-kube-api-access-rgxsg\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.798903 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-config-data\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.798947 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.799037 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-log-httpd\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.799082 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-scripts\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.799110 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0"
Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.897269 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:03:45 crc kubenswrapper[4780]: E0219 10:03:45.898301 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data
kube-api-access-rgxsg log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="785b5604-93c8-4a59-9c4a-e675dba2af48" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.900867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgxsg\" (UniqueName: \"kubernetes.io/projected/785b5604-93c8-4a59-9c4a-e675dba2af48-kube-api-access-rgxsg\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901040 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-config-data\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901076 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901164 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-log-httpd\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901198 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-scripts\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 
crc kubenswrapper[4780]: I0219 10:03:45.901217 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-run-httpd\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-run-httpd\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.901947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-log-httpd\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.905877 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.905878 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.906149 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-config-data\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.911825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-scripts\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.925054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgxsg\" (UniqueName: \"kubernetes.io/projected/785b5604-93c8-4a59-9c4a-e675dba2af48-kube-api-access-rgxsg\") pod \"ceilometer-0\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " pod="openstack/ceilometer-0" Feb 19 10:03:45 crc kubenswrapper[4780]: I0219 10:03:45.952480 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb76453-88df-4b6a-839e-ea810a0c67fb" path="/var/lib/kubelet/pods/3eb76453-88df-4b6a-839e-ea810a0c67fb/volumes" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.087791 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.102551 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.207960 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-config-data\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.208064 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-sg-core-conf-yaml\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.208086 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-scripts\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.208117 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-log-httpd\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.208168 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgxsg\" (UniqueName: \"kubernetes.io/projected/785b5604-93c8-4a59-9c4a-e675dba2af48-kube-api-access-rgxsg\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.208353 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-combined-ca-bundle\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.208497 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-run-httpd\") pod \"785b5604-93c8-4a59-9c4a-e675dba2af48\" (UID: \"785b5604-93c8-4a59-9c4a-e675dba2af48\") " Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.209212 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.209408 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.214461 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.215340 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785b5604-93c8-4a59-9c4a-e675dba2af48-kube-api-access-rgxsg" (OuterVolumeSpecName: "kube-api-access-rgxsg") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "kube-api-access-rgxsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.216848 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-config-data" (OuterVolumeSpecName: "config-data") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.218339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.218907 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-scripts" (OuterVolumeSpecName: "scripts") pod "785b5604-93c8-4a59-9c4a-e675dba2af48" (UID: "785b5604-93c8-4a59-9c4a-e675dba2af48"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311873 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311918 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311927 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311936 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311946 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785b5604-93c8-4a59-9c4a-e675dba2af48-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311957 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/785b5604-93c8-4a59-9c4a-e675dba2af48-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:46 crc kubenswrapper[4780]: I0219 10:03:46.311966 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgxsg\" (UniqueName: \"kubernetes.io/projected/785b5604-93c8-4a59-9c4a-e675dba2af48-kube-api-access-rgxsg\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.097821 4780 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.160068 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.187301 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.197675 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.201690 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.204866 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.206106 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.207869 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.333515 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.333587 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.333613 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwqp\" (UniqueName: \"kubernetes.io/projected/817f9599-e2dd-4250-998f-bbd58105c51c-kube-api-access-5nwqp\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.333852 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-config-data\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.333942 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817f9599-e2dd-4250-998f-bbd58105c51c-log-httpd\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.334022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-scripts\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.334295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817f9599-e2dd-4250-998f-bbd58105c51c-run-httpd\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.436660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.436749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.436783 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwqp\" (UniqueName: \"kubernetes.io/projected/817f9599-e2dd-4250-998f-bbd58105c51c-kube-api-access-5nwqp\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.436874 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-config-data\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.436908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817f9599-e2dd-4250-998f-bbd58105c51c-log-httpd\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.436953 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-scripts\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 
10:03:47.437014 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817f9599-e2dd-4250-998f-bbd58105c51c-run-httpd\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.437674 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817f9599-e2dd-4250-998f-bbd58105c51c-run-httpd\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.439070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817f9599-e2dd-4250-998f-bbd58105c51c-log-httpd\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.444892 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-scripts\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.445151 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.445727 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " 
pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.451896 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817f9599-e2dd-4250-998f-bbd58105c51c-config-data\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.457010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwqp\" (UniqueName: \"kubernetes.io/projected/817f9599-e2dd-4250-998f-bbd58105c51c-kube-api-access-5nwqp\") pod \"ceilometer-0\" (UID: \"817f9599-e2dd-4250-998f-bbd58105c51c\") " pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.525896 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:47 crc kubenswrapper[4780]: I0219 10:03:47.957960 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785b5604-93c8-4a59-9c4a-e675dba2af48" path="/var/lib/kubelet/pods/785b5604-93c8-4a59-9c4a-e675dba2af48/volumes" Feb 19 10:03:48 crc kubenswrapper[4780]: I0219 10:03:48.093740 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:48 crc kubenswrapper[4780]: I0219 10:03:48.112832 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"817f9599-e2dd-4250-998f-bbd58105c51c","Type":"ContainerStarted","Data":"407b9fe182221d17e5202ebe504f7a959046c9ec012828eef5c268de4a65edee"} Feb 19 10:03:49 crc kubenswrapper[4780]: I0219 10:03:49.123836 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"817f9599-e2dd-4250-998f-bbd58105c51c","Type":"ContainerStarted","Data":"cf3bc14f7038ecef17a80e939d65f506606a4c97f96c6d74bb5bd081dad46f70"} Feb 19 10:03:50 crc kubenswrapper[4780]: I0219 10:03:50.148216 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"817f9599-e2dd-4250-998f-bbd58105c51c","Type":"ContainerStarted","Data":"57d0192dd666d51efa74c7220ac79d041cc28ffab6d3d459ad9d62d83696e26b"} Feb 19 10:03:51 crc kubenswrapper[4780]: I0219 10:03:51.169328 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"817f9599-e2dd-4250-998f-bbd58105c51c","Type":"ContainerStarted","Data":"a4bb2b7d14fd33af2b2e93797f920b6ff9d6e7576cda59ac7d62e5235f9a7e06"} Feb 19 10:03:51 crc kubenswrapper[4780]: I0219 10:03:51.792250 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 19 10:03:52 crc kubenswrapper[4780]: E0219 10:03:52.124545 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde788eb3_b2f0_463a_b6ed_1ebc86b4a4f0.slice\": RecentStats: unable to find data in memory cache]" Feb 19 10:03:53 crc kubenswrapper[4780]: I0219 10:03:53.602937 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 19 10:03:53 crc kubenswrapper[4780]: I0219 10:03:53.790001 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 19 10:03:54 crc kubenswrapper[4780]: I0219 10:03:54.213394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"817f9599-e2dd-4250-998f-bbd58105c51c","Type":"ContainerStarted","Data":"49cc3126f936448cc55c434fd5bd850253b5e4a1d3b57d08359ec4f7bddc58cc"} Feb 19 10:03:54 crc kubenswrapper[4780]: I0219 10:03:54.215433 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:03:54 crc kubenswrapper[4780]: I0219 10:03:54.257907 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.24500976 podStartE2EDuration="7.257885203s" podCreationTimestamp="2026-02-19 10:03:47 +0000 UTC" firstStartedPulling="2026-02-19 10:03:48.100001764 +0000 UTC m=+6170.843659223" lastFinishedPulling="2026-02-19 10:03:53.112877217 +0000 UTC m=+6175.856534666" observedRunningTime="2026-02-19 10:03:54.245643376 +0000 UTC m=+6176.989300825" watchObservedRunningTime="2026-02-19 10:03:54.257885203 +0000 UTC m=+6177.001542652" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.668470 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vl677"] Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.673291 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.686148 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vl677"] Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.733344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-catalog-content\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.734254 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkq54\" (UniqueName: \"kubernetes.io/projected/341e6a90-f9e6-4efb-b28b-5c800f32ee47-kube-api-access-pkq54\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.734502 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-utilities\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.837882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkq54\" (UniqueName: \"kubernetes.io/projected/341e6a90-f9e6-4efb-b28b-5c800f32ee47-kube-api-access-pkq54\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.838209 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-utilities\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.838394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-catalog-content\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.838877 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-utilities\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.838929 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-catalog-content\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:58 crc kubenswrapper[4780]: I0219 10:03:58.872905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkq54\" (UniqueName: \"kubernetes.io/projected/341e6a90-f9e6-4efb-b28b-5c800f32ee47-kube-api-access-pkq54\") pod \"certified-operators-vl677\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:59 crc kubenswrapper[4780]: I0219 10:03:59.015646 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:03:59 crc kubenswrapper[4780]: I0219 10:03:59.543538 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vl677"] Feb 19 10:04:00 crc kubenswrapper[4780]: I0219 10:04:00.293985 4780 generic.go:334] "Generic (PLEG): container finished" podID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerID="d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f" exitCode=0 Feb 19 10:04:00 crc kubenswrapper[4780]: I0219 10:04:00.294094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerDied","Data":"d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f"} Feb 19 10:04:00 crc kubenswrapper[4780]: I0219 10:04:00.294340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerStarted","Data":"ec8efa6f6a7c9d19310fcc25b80022c74ef893e861dd573e2e78b7d774bce2bd"} Feb 19 10:04:01 crc kubenswrapper[4780]: I0219 10:04:01.308398 4780 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerStarted","Data":"004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb"} Feb 19 10:04:03 crc kubenswrapper[4780]: I0219 10:04:03.327797 4780 generic.go:334] "Generic (PLEG): container finished" podID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerID="004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb" exitCode=0 Feb 19 10:04:03 crc kubenswrapper[4780]: I0219 10:04:03.327899 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerDied","Data":"004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb"} Feb 19 10:04:03 crc kubenswrapper[4780]: I0219 10:04:03.714764 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 19 10:04:04 crc kubenswrapper[4780]: I0219 10:04:04.344775 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerStarted","Data":"89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117"} Feb 19 10:04:06 crc kubenswrapper[4780]: I0219 10:04:06.337054 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:04:06 crc kubenswrapper[4780]: I0219 10:04:06.338349 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 10:04:06 crc kubenswrapper[4780]: I0219 10:04:06.338444 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:04:06 crc kubenswrapper[4780]: I0219 10:04:06.339617 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eaba445b6162a4b57119126d86dd6757824995ccc18b63f567b6bed2e9bf9593"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:04:06 crc kubenswrapper[4780]: I0219 10:04:06.339701 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://eaba445b6162a4b57119126d86dd6757824995ccc18b63f567b6bed2e9bf9593" gracePeriod=600 Feb 19 10:04:07 crc kubenswrapper[4780]: I0219 10:04:07.386275 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="eaba445b6162a4b57119126d86dd6757824995ccc18b63f567b6bed2e9bf9593" exitCode=0 Feb 19 10:04:07 crc kubenswrapper[4780]: I0219 10:04:07.386737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"eaba445b6162a4b57119126d86dd6757824995ccc18b63f567b6bed2e9bf9593"} Feb 19 10:04:07 crc kubenswrapper[4780]: I0219 10:04:07.386786 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987"} Feb 19 10:04:07 crc kubenswrapper[4780]: I0219 10:04:07.386822 4780 scope.go:117] "RemoveContainer" containerID="e4c2feb01a715d152e4a09b9fb3a018a5f4bfdea6d45e100a05df3c9ec1b91df" Feb 19 10:04:07 crc kubenswrapper[4780]: I0219 10:04:07.411567 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vl677" podStartSLOduration=5.965136648 podStartE2EDuration="9.411544852s" podCreationTimestamp="2026-02-19 10:03:58 +0000 UTC" firstStartedPulling="2026-02-19 10:04:00.296514494 +0000 UTC m=+6183.040171943" lastFinishedPulling="2026-02-19 10:04:03.742922698 +0000 UTC m=+6186.486580147" observedRunningTime="2026-02-19 10:04:04.367481041 +0000 UTC m=+6187.111138490" watchObservedRunningTime="2026-02-19 10:04:07.411544852 +0000 UTC m=+6190.155202311" Feb 19 10:04:09 crc kubenswrapper[4780]: I0219 10:04:09.017665 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:04:09 crc kubenswrapper[4780]: I0219 10:04:09.018166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:04:10 crc kubenswrapper[4780]: I0219 10:04:10.069023 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vl677" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="registry-server" probeResult="failure" output=< Feb 19 10:04:10 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:04:10 crc kubenswrapper[4780]: > Feb 19 10:04:17 crc kubenswrapper[4780]: I0219 10:04:17.539214 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:04:19 crc kubenswrapper[4780]: I0219 10:04:19.077790 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:04:19 crc kubenswrapper[4780]: I0219 10:04:19.131564 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:04:22 crc kubenswrapper[4780]: I0219 10:04:22.634231 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vl677"] Feb 19 10:04:22 crc kubenswrapper[4780]: I0219 10:04:22.635299 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vl677" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="registry-server" containerID="cri-o://89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117" gracePeriod=2 Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.225246 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.321715 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-utilities\") pod \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.321861 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-catalog-content\") pod \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.321987 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkq54\" (UniqueName: 
\"kubernetes.io/projected/341e6a90-f9e6-4efb-b28b-5c800f32ee47-kube-api-access-pkq54\") pod \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\" (UID: \"341e6a90-f9e6-4efb-b28b-5c800f32ee47\") " Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.322902 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-utilities" (OuterVolumeSpecName: "utilities") pod "341e6a90-f9e6-4efb-b28b-5c800f32ee47" (UID: "341e6a90-f9e6-4efb-b28b-5c800f32ee47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.329600 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341e6a90-f9e6-4efb-b28b-5c800f32ee47-kube-api-access-pkq54" (OuterVolumeSpecName: "kube-api-access-pkq54") pod "341e6a90-f9e6-4efb-b28b-5c800f32ee47" (UID: "341e6a90-f9e6-4efb-b28b-5c800f32ee47"). InnerVolumeSpecName "kube-api-access-pkq54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.387937 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "341e6a90-f9e6-4efb-b28b-5c800f32ee47" (UID: "341e6a90-f9e6-4efb-b28b-5c800f32ee47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.425000 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.425034 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkq54\" (UniqueName: \"kubernetes.io/projected/341e6a90-f9e6-4efb-b28b-5c800f32ee47-kube-api-access-pkq54\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.425044 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341e6a90-f9e6-4efb-b28b-5c800f32ee47-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.558280 4780 generic.go:334] "Generic (PLEG): container finished" podID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerID="89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117" exitCode=0 Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.558343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerDied","Data":"89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117"} Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.558392 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl677" event={"ID":"341e6a90-f9e6-4efb-b28b-5c800f32ee47","Type":"ContainerDied","Data":"ec8efa6f6a7c9d19310fcc25b80022c74ef893e861dd573e2e78b7d774bce2bd"} Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.558417 4780 scope.go:117] "RemoveContainer" containerID="89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 
10:04:23.558974 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vl677" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.593648 4780 scope.go:117] "RemoveContainer" containerID="004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.598241 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vl677"] Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.608554 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vl677"] Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.654607 4780 scope.go:117] "RemoveContainer" containerID="d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.676477 4780 scope.go:117] "RemoveContainer" containerID="89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117" Feb 19 10:04:23 crc kubenswrapper[4780]: E0219 10:04:23.676927 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117\": container with ID starting with 89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117 not found: ID does not exist" containerID="89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.676981 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117"} err="failed to get container status \"89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117\": rpc error: code = NotFound desc = could not find container \"89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117\": container with ID starting with 
89b54d47d8ee57926690f800be7ee8738667704921e02a577662a6f637514117 not found: ID does not exist" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.677016 4780 scope.go:117] "RemoveContainer" containerID="004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb" Feb 19 10:04:23 crc kubenswrapper[4780]: E0219 10:04:23.677537 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb\": container with ID starting with 004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb not found: ID does not exist" containerID="004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.677567 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb"} err="failed to get container status \"004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb\": rpc error: code = NotFound desc = could not find container \"004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb\": container with ID starting with 004f10004236ee7d5444314e75e2ab6d5ffcfbc9730561240ebae672f0b6ddbb not found: ID does not exist" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.677587 4780 scope.go:117] "RemoveContainer" containerID="d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f" Feb 19 10:04:23 crc kubenswrapper[4780]: E0219 10:04:23.677916 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f\": container with ID starting with d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f not found: ID does not exist" containerID="d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f" Feb 19 10:04:23 crc 
kubenswrapper[4780]: I0219 10:04:23.677977 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f"} err="failed to get container status \"d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f\": rpc error: code = NotFound desc = could not find container \"d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f\": container with ID starting with d60893386b769ce4ed0728e404e5569808fcf5c81b4d2f6b9fd4b3a6da8a2a6f not found: ID does not exist" Feb 19 10:04:23 crc kubenswrapper[4780]: I0219 10:04:23.950698 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" path="/var/lib/kubelet/pods/341e6a90-f9e6-4efb-b28b-5c800f32ee47/volumes" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.705623 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bbb548bd7-2z9cz"] Feb 19 10:04:46 crc kubenswrapper[4780]: E0219 10:04:46.707018 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="extract-utilities" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.707059 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="extract-utilities" Feb 19 10:04:46 crc kubenswrapper[4780]: E0219 10:04:46.707090 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="registry-server" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.707097 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="registry-server" Feb 19 10:04:46 crc kubenswrapper[4780]: E0219 10:04:46.707115 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="extract-content" Feb 19 
10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.707121 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="extract-content" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.707356 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="341e6a90-f9e6-4efb-b28b-5c800f32ee47" containerName="registry-server" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.708557 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.712726 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.723625 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bbb548bd7-2z9cz"] Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.911228 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-openstack-cell1\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.911286 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs7m8\" (UniqueName: \"kubernetes.io/projected/d786e29d-9d48-4663-ba48-d94468ec8e1e-kube-api-access-qs7m8\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.911336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-sb\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.911531 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-nb\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.911580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-config\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:46 crc kubenswrapper[4780]: I0219 10:04:46.911659 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-dns-svc\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.013580 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-nb\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.013643 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-config\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.013690 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-dns-svc\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.013855 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-openstack-cell1\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.013913 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs7m8\" (UniqueName: \"kubernetes.io/projected/d786e29d-9d48-4663-ba48-d94468ec8e1e-kube-api-access-qs7m8\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.013957 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-sb\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.014548 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-nb\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.014691 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-config\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.015560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-dns-svc\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.015630 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-openstack-cell1\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.015705 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-sb\") pod \"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.034973 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs7m8\" (UniqueName: \"kubernetes.io/projected/d786e29d-9d48-4663-ba48-d94468ec8e1e-kube-api-access-qs7m8\") pod 
\"dnsmasq-dns-6bbb548bd7-2z9cz\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.046811 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:47 crc kubenswrapper[4780]: I0219 10:04:47.537053 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bbb548bd7-2z9cz"] Feb 19 10:04:48 crc kubenswrapper[4780]: I0219 10:04:48.018224 4780 generic.go:334] "Generic (PLEG): container finished" podID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerID="f02423360f2c45e6be2ab27bc2dbc56e0f928e0373c4d635a2cad65239de6418" exitCode=0 Feb 19 10:04:48 crc kubenswrapper[4780]: I0219 10:04:48.018422 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" event={"ID":"d786e29d-9d48-4663-ba48-d94468ec8e1e","Type":"ContainerDied","Data":"f02423360f2c45e6be2ab27bc2dbc56e0f928e0373c4d635a2cad65239de6418"} Feb 19 10:04:48 crc kubenswrapper[4780]: I0219 10:04:48.018607 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" event={"ID":"d786e29d-9d48-4663-ba48-d94468ec8e1e","Type":"ContainerStarted","Data":"588b784201d0a16318aad4962dfff5b89a2ecc88508d21747ae56e53d2346496"} Feb 19 10:04:49 crc kubenswrapper[4780]: I0219 10:04:49.029426 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" event={"ID":"d786e29d-9d48-4663-ba48-d94468ec8e1e","Type":"ContainerStarted","Data":"1c5c9aa98ed56c0d22314a8306549d2bfe2bcd4362b7788c8068370c95455947"} Feb 19 10:04:49 crc kubenswrapper[4780]: I0219 10:04:49.031148 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:49 crc kubenswrapper[4780]: I0219 10:04:49.064096 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" podStartSLOduration=3.064073422 podStartE2EDuration="3.064073422s" podCreationTimestamp="2026-02-19 10:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:49.055423346 +0000 UTC m=+6231.799080835" watchObservedRunningTime="2026-02-19 10:04:49.064073422 +0000 UTC m=+6231.807730881" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.049059 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.138231 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558f8558d5-g2glz"] Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.138953 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerName="dnsmasq-dns" containerID="cri-o://c835cdc8838d5ea2fd108dcbd65813f3cd4a17244dfe396a9af7315775b13ae1" gracePeriod=10 Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.369475 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f79c7847-d2mxn"] Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.371979 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.404474 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f79c7847-d2mxn"] Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.473093 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-config\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.473235 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7qx\" (UniqueName: \"kubernetes.io/projected/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-kube-api-access-9n7qx\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.473281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-ovsdbserver-nb\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.473368 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-dns-svc\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.473427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-ovsdbserver-sb\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.473652 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-openstack-cell1\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.575846 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-dns-svc\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.575928 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-ovsdbserver-sb\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.575985 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-openstack-cell1\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.576108 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-config\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.576211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n7qx\" (UniqueName: \"kubernetes.io/projected/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-kube-api-access-9n7qx\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.576249 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-ovsdbserver-nb\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.577563 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-ovsdbserver-sb\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.577590 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-openstack-cell1\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.577836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-ovsdbserver-nb\") 
pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.577971 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-config\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.578199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-dns-svc\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.621010 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n7qx\" (UniqueName: \"kubernetes.io/projected/19b07627-19bc-4e68-8ff4-e2d70d76b4a2-kube-api-access-9n7qx\") pod \"dnsmasq-dns-9f79c7847-d2mxn\" (UID: \"19b07627-19bc-4e68-8ff4-e2d70d76b4a2\") " pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:57 crc kubenswrapper[4780]: I0219 10:04:57.702663 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.119983 4780 generic.go:334] "Generic (PLEG): container finished" podID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerID="c835cdc8838d5ea2fd108dcbd65813f3cd4a17244dfe396a9af7315775b13ae1" exitCode=0 Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.120209 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" event={"ID":"bc04cb27-cdce-4b21-9bf9-6a1054e56c03","Type":"ContainerDied","Data":"c835cdc8838d5ea2fd108dcbd65813f3cd4a17244dfe396a9af7315775b13ae1"} Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.227849 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f79c7847-d2mxn"] Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.428703 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.605849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-sb\") pod \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.605935 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-config\") pod \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.606130 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-dns-svc\") pod \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\" (UID: 
\"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.606191 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-nb\") pod \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.606233 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2wt5\" (UniqueName: \"kubernetes.io/projected/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-kube-api-access-l2wt5\") pod \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\" (UID: \"bc04cb27-cdce-4b21-9bf9-6a1054e56c03\") " Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.621365 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-kube-api-access-l2wt5" (OuterVolumeSpecName: "kube-api-access-l2wt5") pod "bc04cb27-cdce-4b21-9bf9-6a1054e56c03" (UID: "bc04cb27-cdce-4b21-9bf9-6a1054e56c03"). InnerVolumeSpecName "kube-api-access-l2wt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.714626 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2wt5\" (UniqueName: \"kubernetes.io/projected/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-kube-api-access-l2wt5\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.793212 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc04cb27-cdce-4b21-9bf9-6a1054e56c03" (UID: "bc04cb27-cdce-4b21-9bf9-6a1054e56c03"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.805612 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-config" (OuterVolumeSpecName: "config") pod "bc04cb27-cdce-4b21-9bf9-6a1054e56c03" (UID: "bc04cb27-cdce-4b21-9bf9-6a1054e56c03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.817334 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.817372 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.831608 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc04cb27-cdce-4b21-9bf9-6a1054e56c03" (UID: "bc04cb27-cdce-4b21-9bf9-6a1054e56c03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.857531 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc04cb27-cdce-4b21-9bf9-6a1054e56c03" (UID: "bc04cb27-cdce-4b21-9bf9-6a1054e56c03"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.919812 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:58 crc kubenswrapper[4780]: I0219 10:04:58.919852 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc04cb27-cdce-4b21-9bf9-6a1054e56c03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.133393 4780 generic.go:334] "Generic (PLEG): container finished" podID="19b07627-19bc-4e68-8ff4-e2d70d76b4a2" containerID="f1d0c722a2a59c31ccf96963817874d35bb4fdf1ef0ec24a8f0843dfe71a5fbd" exitCode=0 Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.133849 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" event={"ID":"19b07627-19bc-4e68-8ff4-e2d70d76b4a2","Type":"ContainerDied","Data":"f1d0c722a2a59c31ccf96963817874d35bb4fdf1ef0ec24a8f0843dfe71a5fbd"} Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.133887 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" event={"ID":"19b07627-19bc-4e68-8ff4-e2d70d76b4a2","Type":"ContainerStarted","Data":"efba15374ff9dedd3d15e8ff42b88633bba4590bcca9ca4ca2ee4130a33f11a1"} Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.137945 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" event={"ID":"bc04cb27-cdce-4b21-9bf9-6a1054e56c03","Type":"ContainerDied","Data":"b2dac0b1d254e947020085eef072e70ca0aee531b7ba6061cf4334714f7d1cac"} Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.138008 4780 scope.go:117] "RemoveContainer" containerID="c835cdc8838d5ea2fd108dcbd65813f3cd4a17244dfe396a9af7315775b13ae1" Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 
10:04:59.138332 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558f8558d5-g2glz" Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.165630 4780 scope.go:117] "RemoveContainer" containerID="e8f14149a4f0a4270a08e503853374ae250b6bdc22be58b6a4f7d3df4188f847" Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.190369 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558f8558d5-g2glz"] Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.202070 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-558f8558d5-g2glz"] Feb 19 10:04:59 crc kubenswrapper[4780]: I0219 10:04:59.957576 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" path="/var/lib/kubelet/pods/bc04cb27-cdce-4b21-9bf9-6a1054e56c03/volumes" Feb 19 10:05:00 crc kubenswrapper[4780]: I0219 10:05:00.154069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" event={"ID":"19b07627-19bc-4e68-8ff4-e2d70d76b4a2","Type":"ContainerStarted","Data":"5dbd7ef7717fb10477db28106c7378e77435f5529c1cd5430e9257baf0b00dca"} Feb 19 10:05:00 crc kubenswrapper[4780]: I0219 10:05:00.154312 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:05:00 crc kubenswrapper[4780]: I0219 10:05:00.177612 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" podStartSLOduration=3.177589531 podStartE2EDuration="3.177589531s" podCreationTimestamp="2026-02-19 10:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:00.174990916 +0000 UTC m=+6242.918648365" watchObservedRunningTime="2026-02-19 10:05:00.177589531 +0000 UTC m=+6242.921246980" Feb 19 10:05:02 crc kubenswrapper[4780]: 
I0219 10:05:02.941265 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9"] Feb 19 10:05:02 crc kubenswrapper[4780]: E0219 10:05:02.942543 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerName="dnsmasq-dns" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.942560 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerName="dnsmasq-dns" Feb 19 10:05:02 crc kubenswrapper[4780]: E0219 10:05:02.942587 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerName="init" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.942597 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerName="init" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.942962 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc04cb27-cdce-4b21-9bf9-6a1054e56c03" containerName="dnsmasq-dns" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.944157 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.948430 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.948619 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.949066 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.949555 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:05:02 crc kubenswrapper[4780]: I0219 10:05:02.975346 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9"] Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.125596 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.125664 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 
19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.125723 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.125855 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.126051 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhhv\" (UniqueName: \"kubernetes.io/projected/bb82fc50-146a-4618-9e41-1372bf42a5d4-kube-api-access-6zhhv\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.228160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.228272 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.228321 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zhhv\" (UniqueName: \"kubernetes.io/projected/bb82fc50-146a-4618-9e41-1372bf42a5d4-kube-api-access-6zhhv\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.228486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.228513 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.234215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.234553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.234620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.235442 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.254851 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhhv\" (UniqueName: \"kubernetes.io/projected/bb82fc50-146a-4618-9e41-1372bf42a5d4-kube-api-access-6zhhv\") pod 
\"pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.267434 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:03 crc kubenswrapper[4780]: I0219 10:05:03.847090 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9"] Feb 19 10:05:04 crc kubenswrapper[4780]: I0219 10:05:04.206689 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" event={"ID":"bb82fc50-146a-4618-9e41-1372bf42a5d4","Type":"ContainerStarted","Data":"29c16e0f21ff6256dcaf5e596fbe1fb37419813b57a3837c535b37b5a3db499b"} Feb 19 10:05:07 crc kubenswrapper[4780]: I0219 10:05:07.705532 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9f79c7847-d2mxn" Feb 19 10:05:07 crc kubenswrapper[4780]: I0219 10:05:07.780983 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbb548bd7-2z9cz"] Feb 19 10:05:07 crc kubenswrapper[4780]: I0219 10:05:07.782199 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="dnsmasq-dns" containerID="cri-o://1c5c9aa98ed56c0d22314a8306549d2bfe2bcd4362b7788c8068370c95455947" gracePeriod=10 Feb 19 10:05:08 crc kubenswrapper[4780]: I0219 10:05:08.271682 4780 generic.go:334] "Generic (PLEG): container finished" podID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerID="1c5c9aa98ed56c0d22314a8306549d2bfe2bcd4362b7788c8068370c95455947" exitCode=0 Feb 19 10:05:08 crc kubenswrapper[4780]: I0219 10:05:08.271779 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" event={"ID":"d786e29d-9d48-4663-ba48-d94468ec8e1e","Type":"ContainerDied","Data":"1c5c9aa98ed56c0d22314a8306549d2bfe2bcd4362b7788c8068370c95455947"} Feb 19 10:05:12 crc kubenswrapper[4780]: I0219 10:05:12.048163 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.146:5353: connect: connection refused" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.358877 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" event={"ID":"d786e29d-9d48-4663-ba48-d94468ec8e1e","Type":"ContainerDied","Data":"588b784201d0a16318aad4962dfff5b89a2ecc88508d21747ae56e53d2346496"} Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.359527 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588b784201d0a16318aad4962dfff5b89a2ecc88508d21747ae56e53d2346496" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.487359 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.536440 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-nb\") pod \"d786e29d-9d48-4663-ba48-d94468ec8e1e\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.536774 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-config\") pod \"d786e29d-9d48-4663-ba48-d94468ec8e1e\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.536858 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-dns-svc\") pod \"d786e29d-9d48-4663-ba48-d94468ec8e1e\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.536995 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-openstack-cell1\") pod \"d786e29d-9d48-4663-ba48-d94468ec8e1e\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.537071 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs7m8\" (UniqueName: \"kubernetes.io/projected/d786e29d-9d48-4663-ba48-d94468ec8e1e-kube-api-access-qs7m8\") pod \"d786e29d-9d48-4663-ba48-d94468ec8e1e\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.537392 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-sb\") pod \"d786e29d-9d48-4663-ba48-d94468ec8e1e\" (UID: \"d786e29d-9d48-4663-ba48-d94468ec8e1e\") " Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.543636 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d786e29d-9d48-4663-ba48-d94468ec8e1e-kube-api-access-qs7m8" (OuterVolumeSpecName: "kube-api-access-qs7m8") pod "d786e29d-9d48-4663-ba48-d94468ec8e1e" (UID: "d786e29d-9d48-4663-ba48-d94468ec8e1e"). InnerVolumeSpecName "kube-api-access-qs7m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.616312 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d786e29d-9d48-4663-ba48-d94468ec8e1e" (UID: "d786e29d-9d48-4663-ba48-d94468ec8e1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.622249 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "d786e29d-9d48-4663-ba48-d94468ec8e1e" (UID: "d786e29d-9d48-4663-ba48-d94468ec8e1e"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.625506 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d786e29d-9d48-4663-ba48-d94468ec8e1e" (UID: "d786e29d-9d48-4663-ba48-d94468ec8e1e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.630715 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-config" (OuterVolumeSpecName: "config") pod "d786e29d-9d48-4663-ba48-d94468ec8e1e" (UID: "d786e29d-9d48-4663-ba48-d94468ec8e1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.636608 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d786e29d-9d48-4663-ba48-d94468ec8e1e" (UID: "d786e29d-9d48-4663-ba48-d94468ec8e1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.640430 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.640470 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs7m8\" (UniqueName: \"kubernetes.io/projected/d786e29d-9d48-4663-ba48-d94468ec8e1e-kube-api-access-qs7m8\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.640487 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.640500 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 
crc kubenswrapper[4780]: I0219 10:05:15.640516 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4780]: I0219 10:05:15.640529 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d786e29d-9d48-4663-ba48-d94468ec8e1e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:16 crc kubenswrapper[4780]: I0219 10:05:16.371419 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbb548bd7-2z9cz" Feb 19 10:05:16 crc kubenswrapper[4780]: I0219 10:05:16.373135 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" event={"ID":"bb82fc50-146a-4618-9e41-1372bf42a5d4","Type":"ContainerStarted","Data":"d58bc61136e990207808cb2e1275e00e4525f10fe98216bdd7a1775150f1617e"} Feb 19 10:05:16 crc kubenswrapper[4780]: I0219 10:05:16.401358 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" podStartSLOduration=3.069442111 podStartE2EDuration="14.401323872s" podCreationTimestamp="2026-02-19 10:05:02 +0000 UTC" firstStartedPulling="2026-02-19 10:05:03.857099937 +0000 UTC m=+6246.600757386" lastFinishedPulling="2026-02-19 10:05:15.188981698 +0000 UTC m=+6257.932639147" observedRunningTime="2026-02-19 10:05:16.389850475 +0000 UTC m=+6259.133507924" watchObservedRunningTime="2026-02-19 10:05:16.401323872 +0000 UTC m=+6259.144981321" Feb 19 10:05:16 crc kubenswrapper[4780]: I0219 10:05:16.425431 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbb548bd7-2z9cz"] Feb 19 10:05:16 crc kubenswrapper[4780]: I0219 10:05:16.476994 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-6bbb548bd7-2z9cz"] Feb 19 10:05:17 crc kubenswrapper[4780]: I0219 10:05:17.961433 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" path="/var/lib/kubelet/pods/d786e29d-9d48-4663-ba48-d94468ec8e1e/volumes" Feb 19 10:05:29 crc kubenswrapper[4780]: I0219 10:05:29.502948 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb82fc50-146a-4618-9e41-1372bf42a5d4" containerID="d58bc61136e990207808cb2e1275e00e4525f10fe98216bdd7a1775150f1617e" exitCode=0 Feb 19 10:05:29 crc kubenswrapper[4780]: I0219 10:05:29.503070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" event={"ID":"bb82fc50-146a-4618-9e41-1372bf42a5d4","Type":"ContainerDied","Data":"d58bc61136e990207808cb2e1275e00e4525f10fe98216bdd7a1775150f1617e"} Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.037859 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.128288 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ceph\") pod \"bb82fc50-146a-4618-9e41-1372bf42a5d4\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.128418 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ssh-key-openstack-cell1\") pod \"bb82fc50-146a-4618-9e41-1372bf42a5d4\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.128504 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zhhv\" (UniqueName: \"kubernetes.io/projected/bb82fc50-146a-4618-9e41-1372bf42a5d4-kube-api-access-6zhhv\") pod \"bb82fc50-146a-4618-9e41-1372bf42a5d4\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.128573 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-inventory\") pod \"bb82fc50-146a-4618-9e41-1372bf42a5d4\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.128589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-pre-adoption-validation-combined-ca-bundle\") pod \"bb82fc50-146a-4618-9e41-1372bf42a5d4\" (UID: \"bb82fc50-146a-4618-9e41-1372bf42a5d4\") " Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.134951 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ceph" (OuterVolumeSpecName: "ceph") pod "bb82fc50-146a-4618-9e41-1372bf42a5d4" (UID: "bb82fc50-146a-4618-9e41-1372bf42a5d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.135651 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb82fc50-146a-4618-9e41-1372bf42a5d4-kube-api-access-6zhhv" (OuterVolumeSpecName: "kube-api-access-6zhhv") pod "bb82fc50-146a-4618-9e41-1372bf42a5d4" (UID: "bb82fc50-146a-4618-9e41-1372bf42a5d4"). InnerVolumeSpecName "kube-api-access-6zhhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.135937 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "bb82fc50-146a-4618-9e41-1372bf42a5d4" (UID: "bb82fc50-146a-4618-9e41-1372bf42a5d4"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.170565 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "bb82fc50-146a-4618-9e41-1372bf42a5d4" (UID: "bb82fc50-146a-4618-9e41-1372bf42a5d4"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.176159 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-inventory" (OuterVolumeSpecName: "inventory") pod "bb82fc50-146a-4618-9e41-1372bf42a5d4" (UID: "bb82fc50-146a-4618-9e41-1372bf42a5d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.230903 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.230937 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.230950 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zhhv\" (UniqueName: \"kubernetes.io/projected/bb82fc50-146a-4618-9e41-1372bf42a5d4-kube-api-access-6zhhv\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.230959 4780 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.230969 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb82fc50-146a-4618-9e41-1372bf42a5d4-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.529893 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" event={"ID":"bb82fc50-146a-4618-9e41-1372bf42a5d4","Type":"ContainerDied","Data":"29c16e0f21ff6256dcaf5e596fbe1fb37419813b57a3837c535b37b5a3db499b"} Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.529969 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29c16e0f21ff6256dcaf5e596fbe1fb37419813b57a3837c535b37b5a3db499b" Feb 19 10:05:31 crc kubenswrapper[4780]: I0219 10:05:31.530084 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.707731 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2"] Feb 19 10:05:40 crc kubenswrapper[4780]: E0219 10:05:40.708910 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="dnsmasq-dns" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.708928 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="dnsmasq-dns" Feb 19 10:05:40 crc kubenswrapper[4780]: E0219 10:05:40.708953 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb82fc50-146a-4618-9e41-1372bf42a5d4" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.708963 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb82fc50-146a-4618-9e41-1372bf42a5d4" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 10:05:40 crc kubenswrapper[4780]: E0219 10:05:40.708989 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="init" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.708997 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="init" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.709289 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d786e29d-9d48-4663-ba48-d94468ec8e1e" containerName="dnsmasq-dns" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.709320 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb82fc50-146a-4618-9e41-1372bf42a5d4" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.710358 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.717826 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.717873 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.717992 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.718571 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.719556 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2"] Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.776910 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxcb\" (UniqueName: \"kubernetes.io/projected/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-kube-api-access-qkxcb\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.777401 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.777500 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.777574 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.777600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.879402 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.879525 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.879563 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.879678 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkxcb\" (UniqueName: \"kubernetes.io/projected/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-kube-api-access-qkxcb\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.879741 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.886102 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.886720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.887534 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.900714 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkxcb\" (UniqueName: \"kubernetes.io/projected/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-kube-api-access-qkxcb\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: 
\"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:40 crc kubenswrapper[4780]: I0219 10:05:40.901751 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:41 crc kubenswrapper[4780]: I0219 10:05:41.032176 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:05:41 crc kubenswrapper[4780]: I0219 10:05:41.628116 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2"] Feb 19 10:05:41 crc kubenswrapper[4780]: W0219 10:05:41.630088 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4bcad9c_e8e3_4090_ac8b_015bfce05a61.slice/crio-56eb6a4bb19431d7f3bf3b51cd7c5183f11d21afabd5c5d0ad21eb53ae2f8a03 WatchSource:0}: Error finding container 56eb6a4bb19431d7f3bf3b51cd7c5183f11d21afabd5c5d0ad21eb53ae2f8a03: Status 404 returned error can't find the container with id 56eb6a4bb19431d7f3bf3b51cd7c5183f11d21afabd5c5d0ad21eb53ae2f8a03 Feb 19 10:05:42 crc kubenswrapper[4780]: I0219 10:05:42.644561 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" event={"ID":"d4bcad9c-e8e3-4090-ac8b-015bfce05a61","Type":"ContainerStarted","Data":"701a54c2c20b83e88f5ae5fd1dc987af2d2060acea87443d5fb4135d64a017fc"} Feb 19 10:05:42 crc kubenswrapper[4780]: I0219 10:05:42.645070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" 
event={"ID":"d4bcad9c-e8e3-4090-ac8b-015bfce05a61","Type":"ContainerStarted","Data":"56eb6a4bb19431d7f3bf3b51cd7c5183f11d21afabd5c5d0ad21eb53ae2f8a03"} Feb 19 10:05:42 crc kubenswrapper[4780]: I0219 10:05:42.679165 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" podStartSLOduration=2.267423981 podStartE2EDuration="2.679112869s" podCreationTimestamp="2026-02-19 10:05:40 +0000 UTC" firstStartedPulling="2026-02-19 10:05:41.633071683 +0000 UTC m=+6284.376729132" lastFinishedPulling="2026-02-19 10:05:42.044760571 +0000 UTC m=+6284.788418020" observedRunningTime="2026-02-19 10:05:42.662727789 +0000 UTC m=+6285.406385278" watchObservedRunningTime="2026-02-19 10:05:42.679112869 +0000 UTC m=+6285.422770358" Feb 19 10:05:58 crc kubenswrapper[4780]: I0219 10:05:58.096239 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-2xfhc"] Feb 19 10:05:58 crc kubenswrapper[4780]: I0219 10:05:58.126424 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-2xfhc"] Feb 19 10:05:59 crc kubenswrapper[4780]: I0219 10:05:59.957219 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde46373-6de9-4921-8d5a-d0231ca24aa4" path="/var/lib/kubelet/pods/bde46373-6de9-4921-8d5a-d0231ca24aa4/volumes" Feb 19 10:06:00 crc kubenswrapper[4780]: I0219 10:06:00.046955 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-c55c-account-create-update-zz228"] Feb 19 10:06:00 crc kubenswrapper[4780]: I0219 10:06:00.062159 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-c55c-account-create-update-zz228"] Feb 19 10:06:01 crc kubenswrapper[4780]: I0219 10:06:01.955157 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1987e82-c3a2-49d9-b234-06252c4b17c2" path="/var/lib/kubelet/pods/f1987e82-c3a2-49d9-b234-06252c4b17c2/volumes" Feb 19 10:06:06 crc 
kubenswrapper[4780]: I0219 10:06:06.041879 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-q6njm"] Feb 19 10:06:06 crc kubenswrapper[4780]: I0219 10:06:06.058353 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-q6njm"] Feb 19 10:06:06 crc kubenswrapper[4780]: I0219 10:06:06.336824 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:06:06 crc kubenswrapper[4780]: I0219 10:06:06.337425 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:06:07 crc kubenswrapper[4780]: I0219 10:06:07.044951 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-0c3c-account-create-update-ljnws"] Feb 19 10:06:07 crc kubenswrapper[4780]: I0219 10:06:07.065382 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-0c3c-account-create-update-ljnws"] Feb 19 10:06:07 crc kubenswrapper[4780]: I0219 10:06:07.967613 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100678d8-ee54-41f3-ba9b-b37a79cc7385" path="/var/lib/kubelet/pods/100678d8-ee54-41f3-ba9b-b37a79cc7385/volumes" Feb 19 10:06:07 crc kubenswrapper[4780]: I0219 10:06:07.971048 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8078e9b0-3cbc-4fc3-8305-aee96f30eadc" path="/var/lib/kubelet/pods/8078e9b0-3cbc-4fc3-8305-aee96f30eadc/volumes" Feb 19 10:06:15 crc kubenswrapper[4780]: I0219 10:06:15.214302 4780 
scope.go:117] "RemoveContainer" containerID="ec5b152c9ee958b2b78e53351e3ab1db7417042b3622bab63f1db562e973f4c1" Feb 19 10:06:15 crc kubenswrapper[4780]: I0219 10:06:15.274426 4780 scope.go:117] "RemoveContainer" containerID="2bbc1941c72f26ab918e336bb78d50ba7cfe4fa8bcb0747ea4e1e479ee94b66a" Feb 19 10:06:15 crc kubenswrapper[4780]: I0219 10:06:15.331186 4780 scope.go:117] "RemoveContainer" containerID="51e012ff1ad5ff5ccc59e9bbb595ee0ad582349068d8a5f778223610a5208623" Feb 19 10:06:15 crc kubenswrapper[4780]: I0219 10:06:15.393792 4780 scope.go:117] "RemoveContainer" containerID="810a0c7581deab1ae63299efa3f63211086a9f75f805e8485a135839fef7580a" Feb 19 10:06:36 crc kubenswrapper[4780]: I0219 10:06:36.336521 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:06:36 crc kubenswrapper[4780]: I0219 10:06:36.337419 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:06:43 crc kubenswrapper[4780]: I0219 10:06:43.057065 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-ml9nw"] Feb 19 10:06:43 crc kubenswrapper[4780]: I0219 10:06:43.068690 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-ml9nw"] Feb 19 10:06:43 crc kubenswrapper[4780]: I0219 10:06:43.969748 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237df558-e233-4dd2-a360-44cdbe273c41" path="/var/lib/kubelet/pods/237df558-e233-4dd2-a360-44cdbe273c41/volumes" Feb 19 10:07:06 crc kubenswrapper[4780]: 
I0219 10:07:06.337292 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.337929 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.337980 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.338908 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.338964 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" gracePeriod=600 Feb 19 10:07:06 crc kubenswrapper[4780]: E0219 10:07:06.484453 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.623449 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" exitCode=0 Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.623516 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987"} Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.623571 4780 scope.go:117] "RemoveContainer" containerID="eaba445b6162a4b57119126d86dd6757824995ccc18b63f567b6bed2e9bf9593" Feb 19 10:07:06 crc kubenswrapper[4780]: I0219 10:07:06.624772 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:07:06 crc kubenswrapper[4780]: E0219 10:07:06.625110 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:07:15 crc kubenswrapper[4780]: I0219 10:07:15.546501 4780 scope.go:117] "RemoveContainer" containerID="c425696c251937575105af05c55677a6e77f47be0c85e364329f983fab6425af" Feb 19 10:07:15 crc kubenswrapper[4780]: I0219 10:07:15.632714 4780 scope.go:117] "RemoveContainer" 
containerID="51fe2ec23cfc01b285e72b4a844f01f393d6a22f846136d0c4507feea2508d0e" Feb 19 10:07:21 crc kubenswrapper[4780]: I0219 10:07:21.938916 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:07:21 crc kubenswrapper[4780]: E0219 10:07:21.940339 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:07:32 crc kubenswrapper[4780]: I0219 10:07:32.938893 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:07:32 crc kubenswrapper[4780]: E0219 10:07:32.940654 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:07:45 crc kubenswrapper[4780]: I0219 10:07:45.940586 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:07:45 crc kubenswrapper[4780]: E0219 10:07:45.942181 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:08:00 crc kubenswrapper[4780]: I0219 10:08:00.938626 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:08:00 crc kubenswrapper[4780]: E0219 10:08:00.939581 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:08:14 crc kubenswrapper[4780]: I0219 10:08:14.938872 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:08:14 crc kubenswrapper[4780]: E0219 10:08:14.939976 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:08:28 crc kubenswrapper[4780]: I0219 10:08:28.939642 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:08:28 crc kubenswrapper[4780]: E0219 10:08:28.940658 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:08:39 crc kubenswrapper[4780]: I0219 10:08:39.940663 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:08:39 crc kubenswrapper[4780]: E0219 10:08:39.941494 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.757885 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvwtg"] Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.762365 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.779165 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvwtg"] Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.803649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-catalog-content\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.803967 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-utilities\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.804045 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kss6\" (UniqueName: \"kubernetes.io/projected/1651b232-d50c-483e-9f1c-3cb929310091-kube-api-access-5kss6\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.905815 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-utilities\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.906253 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5kss6\" (UniqueName: \"kubernetes.io/projected/1651b232-d50c-483e-9f1c-3cb929310091-kube-api-access-5kss6\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.906431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-catalog-content\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.906491 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-utilities\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.906706 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-catalog-content\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:45 crc kubenswrapper[4780]: I0219 10:08:45.929695 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kss6\" (UniqueName: \"kubernetes.io/projected/1651b232-d50c-483e-9f1c-3cb929310091-kube-api-access-5kss6\") pod \"community-operators-qvwtg\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:46 crc kubenswrapper[4780]: I0219 10:08:46.093146 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:46 crc kubenswrapper[4780]: I0219 10:08:46.715378 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvwtg"] Feb 19 10:08:46 crc kubenswrapper[4780]: I0219 10:08:46.785040 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerStarted","Data":"6ea4e23e7a2a4341b307f062581b05aee4f6bafc70a9b7c49a671265ed838b36"} Feb 19 10:08:47 crc kubenswrapper[4780]: I0219 10:08:47.800042 4780 generic.go:334] "Generic (PLEG): container finished" podID="1651b232-d50c-483e-9f1c-3cb929310091" containerID="9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9" exitCode=0 Feb 19 10:08:47 crc kubenswrapper[4780]: I0219 10:08:47.800236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerDied","Data":"9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9"} Feb 19 10:08:47 crc kubenswrapper[4780]: I0219 10:08:47.803564 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:08:49 crc kubenswrapper[4780]: I0219 10:08:49.828192 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerStarted","Data":"257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608"} Feb 19 10:08:51 crc kubenswrapper[4780]: E0219 10:08:51.179340 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1651b232_d50c_483e_9f1c_3cb929310091.slice/crio-257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:08:51 crc kubenswrapper[4780]: I0219 10:08:51.851381 4780 generic.go:334] "Generic (PLEG): container finished" podID="1651b232-d50c-483e-9f1c-3cb929310091" containerID="257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608" exitCode=0 Feb 19 10:08:51 crc kubenswrapper[4780]: I0219 10:08:51.851492 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerDied","Data":"257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608"} Feb 19 10:08:51 crc kubenswrapper[4780]: I0219 10:08:51.938969 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:08:51 crc kubenswrapper[4780]: E0219 10:08:51.939368 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:08:52 crc kubenswrapper[4780]: I0219 10:08:52.867917 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerStarted","Data":"4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d"} Feb 19 10:08:52 crc kubenswrapper[4780]: I0219 10:08:52.898024 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-qvwtg" podStartSLOduration=3.162033963 podStartE2EDuration="7.897989057s" podCreationTimestamp="2026-02-19 10:08:45 +0000 UTC" firstStartedPulling="2026-02-19 10:08:47.803098908 +0000 UTC m=+6470.546756357" lastFinishedPulling="2026-02-19 10:08:52.539053992 +0000 UTC m=+6475.282711451" observedRunningTime="2026-02-19 10:08:52.89251665 +0000 UTC m=+6475.636174099" watchObservedRunningTime="2026-02-19 10:08:52.897989057 +0000 UTC m=+6475.641646516" Feb 19 10:08:56 crc kubenswrapper[4780]: I0219 10:08:56.094350 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:56 crc kubenswrapper[4780]: I0219 10:08:56.095275 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:08:56 crc kubenswrapper[4780]: I0219 10:08:56.150675 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:09:06 crc kubenswrapper[4780]: I0219 10:09:06.151519 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:09:06 crc kubenswrapper[4780]: I0219 10:09:06.225209 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvwtg"] Feb 19 10:09:06 crc kubenswrapper[4780]: I0219 10:09:06.939425 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:09:06 crc kubenswrapper[4780]: E0219 10:09:06.940431 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.018706 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qvwtg" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="registry-server" containerID="cri-o://4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d" gracePeriod=2 Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.584530 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.770468 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-catalog-content\") pod \"1651b232-d50c-483e-9f1c-3cb929310091\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.770947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-utilities\") pod \"1651b232-d50c-483e-9f1c-3cb929310091\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.771290 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kss6\" (UniqueName: \"kubernetes.io/projected/1651b232-d50c-483e-9f1c-3cb929310091-kube-api-access-5kss6\") pod \"1651b232-d50c-483e-9f1c-3cb929310091\" (UID: \"1651b232-d50c-483e-9f1c-3cb929310091\") " Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.772332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-utilities" (OuterVolumeSpecName: "utilities") 
pod "1651b232-d50c-483e-9f1c-3cb929310091" (UID: "1651b232-d50c-483e-9f1c-3cb929310091"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.779292 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1651b232-d50c-483e-9f1c-3cb929310091-kube-api-access-5kss6" (OuterVolumeSpecName: "kube-api-access-5kss6") pod "1651b232-d50c-483e-9f1c-3cb929310091" (UID: "1651b232-d50c-483e-9f1c-3cb929310091"). InnerVolumeSpecName "kube-api-access-5kss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.826333 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1651b232-d50c-483e-9f1c-3cb929310091" (UID: "1651b232-d50c-483e-9f1c-3cb929310091"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.874016 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kss6\" (UniqueName: \"kubernetes.io/projected/1651b232-d50c-483e-9f1c-3cb929310091-kube-api-access-5kss6\") on node \"crc\" DevicePath \"\"" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.874055 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:09:07 crc kubenswrapper[4780]: I0219 10:09:07.874065 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1651b232-d50c-483e-9f1c-3cb929310091-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.033400 4780 generic.go:334] "Generic (PLEG): container finished" podID="1651b232-d50c-483e-9f1c-3cb929310091" containerID="4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d" exitCode=0 Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.033487 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvwtg" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.033475 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerDied","Data":"4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d"} Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.033652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvwtg" event={"ID":"1651b232-d50c-483e-9f1c-3cb929310091","Type":"ContainerDied","Data":"6ea4e23e7a2a4341b307f062581b05aee4f6bafc70a9b7c49a671265ed838b36"} Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.033719 4780 scope.go:117] "RemoveContainer" containerID="4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.065519 4780 scope.go:117] "RemoveContainer" containerID="257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.065650 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvwtg"] Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.077277 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qvwtg"] Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.097568 4780 scope.go:117] "RemoveContainer" containerID="9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.162215 4780 scope.go:117] "RemoveContainer" containerID="4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d" Feb 19 10:09:08 crc kubenswrapper[4780]: E0219 10:09:08.163768 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d\": container with ID starting with 4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d not found: ID does not exist" containerID="4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.163827 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d"} err="failed to get container status \"4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d\": rpc error: code = NotFound desc = could not find container \"4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d\": container with ID starting with 4a9af1f34940b3db6cacd4a450d8f563fd5c497c8010768d0b64b94f9fbe271d not found: ID does not exist" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.163884 4780 scope.go:117] "RemoveContainer" containerID="257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608" Feb 19 10:09:08 crc kubenswrapper[4780]: E0219 10:09:08.165155 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608\": container with ID starting with 257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608 not found: ID does not exist" containerID="257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.165195 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608"} err="failed to get container status \"257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608\": rpc error: code = NotFound desc = could not find container \"257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608\": container with ID 
starting with 257183734a822108559c72c0ac305e4113afd090f2b969b1a35d3a334830f608 not found: ID does not exist" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.165217 4780 scope.go:117] "RemoveContainer" containerID="9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9" Feb 19 10:09:08 crc kubenswrapper[4780]: E0219 10:09:08.165614 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9\": container with ID starting with 9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9 not found: ID does not exist" containerID="9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9" Feb 19 10:09:08 crc kubenswrapper[4780]: I0219 10:09:08.165640 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9"} err="failed to get container status \"9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9\": rpc error: code = NotFound desc = could not find container \"9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9\": container with ID starting with 9c00b8b87612160dcc66a33abda7e702d21dcd005e06c858c953410407774db9 not found: ID does not exist" Feb 19 10:09:09 crc kubenswrapper[4780]: I0219 10:09:09.976835 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1651b232-d50c-483e-9f1c-3cb929310091" path="/var/lib/kubelet/pods/1651b232-d50c-483e-9f1c-3cb929310091/volumes" Feb 19 10:09:18 crc kubenswrapper[4780]: I0219 10:09:18.938706 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:09:18 crc kubenswrapper[4780]: E0219 10:09:18.939826 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:09:21 crc kubenswrapper[4780]: I0219 10:09:21.060460 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-fccp7"] Feb 19 10:09:21 crc kubenswrapper[4780]: I0219 10:09:21.079312 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-fccp7"] Feb 19 10:09:21 crc kubenswrapper[4780]: I0219 10:09:21.967910 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27352a2d-b7f7-4056-9ccb-b9947c758e3c" path="/var/lib/kubelet/pods/27352a2d-b7f7-4056-9ccb-b9947c758e3c/volumes" Feb 19 10:09:22 crc kubenswrapper[4780]: I0219 10:09:22.041689 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-7c52-account-create-update-2hnl9"] Feb 19 10:09:22 crc kubenswrapper[4780]: I0219 10:09:22.059220 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-7c52-account-create-update-2hnl9"] Feb 19 10:09:23 crc kubenswrapper[4780]: I0219 10:09:23.955681 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614517db-1826-4ac5-baaf-b1348e466574" path="/var/lib/kubelet/pods/614517db-1826-4ac5-baaf-b1348e466574/volumes" Feb 19 10:09:33 crc kubenswrapper[4780]: I0219 10:09:33.939677 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:09:33 crc kubenswrapper[4780]: E0219 10:09:33.940663 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:09:38 crc kubenswrapper[4780]: I0219 10:09:38.041430 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zmgww"] Feb 19 10:09:38 crc kubenswrapper[4780]: I0219 10:09:38.053718 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zmgww"] Feb 19 10:09:39 crc kubenswrapper[4780]: I0219 10:09:39.957010 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d6b6ef-a763-4a2e-ba5b-844d3095ca19" path="/var/lib/kubelet/pods/49d6b6ef-a763-4a2e-ba5b-844d3095ca19/volumes" Feb 19 10:09:47 crc kubenswrapper[4780]: I0219 10:09:47.948214 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:09:47 crc kubenswrapper[4780]: E0219 10:09:47.949403 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:09:59 crc kubenswrapper[4780]: I0219 10:09:59.939363 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:09:59 crc kubenswrapper[4780]: E0219 10:09:59.940510 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:10:12 crc kubenswrapper[4780]: I0219 10:10:12.938540 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:10:12 crc kubenswrapper[4780]: E0219 10:10:12.940758 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:10:15 crc kubenswrapper[4780]: I0219 10:10:15.860193 4780 scope.go:117] "RemoveContainer" containerID="ba2d0ea12088847876cedb52fa3deb0d8d3a8e9131bffcd9d4bfafaad5983e9e" Feb 19 10:10:15 crc kubenswrapper[4780]: I0219 10:10:15.906001 4780 scope.go:117] "RemoveContainer" containerID="f3db4332ef506edb9d029ab778420de28631dd77c8c719f642de17d8db5358e0" Feb 19 10:10:15 crc kubenswrapper[4780]: I0219 10:10:15.976299 4780 scope.go:117] "RemoveContainer" containerID="6bd77726cca6241b9ea9834d9580a6911c0f404f727819faba61a3a6a7ba19e3" Feb 19 10:10:24 crc kubenswrapper[4780]: I0219 10:10:24.939210 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:10:24 crc kubenswrapper[4780]: E0219 10:10:24.939998 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:10:35 crc kubenswrapper[4780]: I0219 10:10:35.939399 4780 
scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:10:35 crc kubenswrapper[4780]: E0219 10:10:35.940645 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:10:50 crc kubenswrapper[4780]: I0219 10:10:50.939355 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:10:50 crc kubenswrapper[4780]: E0219 10:10:50.940402 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:11:01 crc kubenswrapper[4780]: I0219 10:11:01.938491 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:11:01 crc kubenswrapper[4780]: E0219 10:11:01.939453 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:11:16 crc kubenswrapper[4780]: I0219 
10:11:16.126049 4780 scope.go:117] "RemoveContainer" containerID="f02423360f2c45e6be2ab27bc2dbc56e0f928e0373c4d635a2cad65239de6418" Feb 19 10:11:16 crc kubenswrapper[4780]: I0219 10:11:16.158152 4780 scope.go:117] "RemoveContainer" containerID="1c5c9aa98ed56c0d22314a8306549d2bfe2bcd4362b7788c8068370c95455947" Feb 19 10:11:16 crc kubenswrapper[4780]: I0219 10:11:16.938829 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:11:16 crc kubenswrapper[4780]: E0219 10:11:16.939613 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:11:30 crc kubenswrapper[4780]: I0219 10:11:30.939107 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:11:30 crc kubenswrapper[4780]: E0219 10:11:30.940232 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.898727 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w9x5q"] Feb 19 10:11:39 crc kubenswrapper[4780]: E0219 10:11:39.901426 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1651b232-d50c-483e-9f1c-3cb929310091" 
containerName="extract-content" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.901556 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="extract-content" Feb 19 10:11:39 crc kubenswrapper[4780]: E0219 10:11:39.901661 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="registry-server" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.901750 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="registry-server" Feb 19 10:11:39 crc kubenswrapper[4780]: E0219 10:11:39.901841 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="extract-utilities" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.901926 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="extract-utilities" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.902434 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1651b232-d50c-483e-9f1c-3cb929310091" containerName="registry-server" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.905885 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.962643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvxm7\" (UniqueName: \"kubernetes.io/projected/3ffa6752-6f79-431f-9299-3f67b3535dc8-kube-api-access-gvxm7\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.963503 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-catalog-content\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.963896 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-utilities\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:39 crc kubenswrapper[4780]: I0219 10:11:39.969343 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9x5q"] Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.067947 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvxm7\" (UniqueName: \"kubernetes.io/projected/3ffa6752-6f79-431f-9299-3f67b3535dc8-kube-api-access-gvxm7\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.068067 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-catalog-content\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.068197 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-utilities\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.068888 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-catalog-content\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.068889 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-utilities\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.108681 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvxm7\" (UniqueName: \"kubernetes.io/projected/3ffa6752-6f79-431f-9299-3f67b3535dc8-kube-api-access-gvxm7\") pod \"redhat-operators-w9x5q\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.233862 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:11:40 crc kubenswrapper[4780]: I0219 10:11:40.832925 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9x5q"] Feb 19 10:11:41 crc kubenswrapper[4780]: I0219 10:11:41.845238 4780 generic.go:334] "Generic (PLEG): container finished" podID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerID="3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1" exitCode=0 Feb 19 10:11:41 crc kubenswrapper[4780]: I0219 10:11:41.845365 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerDied","Data":"3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1"} Feb 19 10:11:41 crc kubenswrapper[4780]: I0219 10:11:41.845921 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerStarted","Data":"73dea5d3ca5cb148178672a20132f3fb0bde4924559beceb06e2df86575a93d1"} Feb 19 10:11:44 crc kubenswrapper[4780]: I0219 10:11:44.940593 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:11:44 crc kubenswrapper[4780]: E0219 10:11:44.942291 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:11:45 crc kubenswrapper[4780]: I0219 10:11:45.892010 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" 
event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerStarted","Data":"7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5"} Feb 19 10:11:56 crc kubenswrapper[4780]: I0219 10:11:56.938697 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:11:56 crc kubenswrapper[4780]: E0219 10:11:56.940649 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:12:00 crc kubenswrapper[4780]: I0219 10:12:00.064492 4780 generic.go:334] "Generic (PLEG): container finished" podID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerID="7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5" exitCode=0 Feb 19 10:12:00 crc kubenswrapper[4780]: I0219 10:12:00.064574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerDied","Data":"7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5"} Feb 19 10:12:01 crc kubenswrapper[4780]: I0219 10:12:01.082001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerStarted","Data":"20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45"} Feb 19 10:12:01 crc kubenswrapper[4780]: I0219 10:12:01.123296 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w9x5q" podStartSLOduration=3.401489029 podStartE2EDuration="22.123239395s" 
podCreationTimestamp="2026-02-19 10:11:39 +0000 UTC" firstStartedPulling="2026-02-19 10:11:41.850225424 +0000 UTC m=+6644.593882873" lastFinishedPulling="2026-02-19 10:12:00.57197578 +0000 UTC m=+6663.315633239" observedRunningTime="2026-02-19 10:12:01.105553542 +0000 UTC m=+6663.849210991" watchObservedRunningTime="2026-02-19 10:12:01.123239395 +0000 UTC m=+6663.866896844" Feb 19 10:12:10 crc kubenswrapper[4780]: I0219 10:12:10.235145 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:12:10 crc kubenswrapper[4780]: I0219 10:12:10.235980 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:12:10 crc kubenswrapper[4780]: I0219 10:12:10.938313 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.235738 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"3473dbf965cd91e8ed4c387e30616cc39ba732d29a5b17a506eafaab158e0bd9"} Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.287594 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w9x5q" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="registry-server" probeResult="failure" output=< Feb 19 10:12:11 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:12:11 crc kubenswrapper[4780]: > Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.339446 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4hvdg"] Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.342394 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.355540 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hvdg"] Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.430387 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-catalog-content\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.430541 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsx7\" (UniqueName: \"kubernetes.io/projected/6b76d358-6017-49e3-9c4a-abceb8257cf4-kube-api-access-4wsx7\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.430588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-utilities\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.533884 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-catalog-content\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.534086 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4wsx7\" (UniqueName: \"kubernetes.io/projected/6b76d358-6017-49e3-9c4a-abceb8257cf4-kube-api-access-4wsx7\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.534192 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-utilities\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.534683 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-catalog-content\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.534965 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-utilities\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.559670 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsx7\" (UniqueName: \"kubernetes.io/projected/6b76d358-6017-49e3-9c4a-abceb8257cf4-kube-api-access-4wsx7\") pod \"redhat-marketplace-4hvdg\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:11 crc kubenswrapper[4780]: I0219 10:12:11.675548 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:12 crc kubenswrapper[4780]: W0219 10:12:12.316943 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b76d358_6017_49e3_9c4a_abceb8257cf4.slice/crio-4f26a61dcd0f3cb52675101255bfc9d8bffd253da41acbf786a5c875cc66ca5f WatchSource:0}: Error finding container 4f26a61dcd0f3cb52675101255bfc9d8bffd253da41acbf786a5c875cc66ca5f: Status 404 returned error can't find the container with id 4f26a61dcd0f3cb52675101255bfc9d8bffd253da41acbf786a5c875cc66ca5f Feb 19 10:12:12 crc kubenswrapper[4780]: I0219 10:12:12.366168 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hvdg"] Feb 19 10:12:13 crc kubenswrapper[4780]: I0219 10:12:13.263628 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerID="7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc" exitCode=0 Feb 19 10:12:13 crc kubenswrapper[4780]: I0219 10:12:13.263788 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerDied","Data":"7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc"} Feb 19 10:12:13 crc kubenswrapper[4780]: I0219 10:12:13.264158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerStarted","Data":"4f26a61dcd0f3cb52675101255bfc9d8bffd253da41acbf786a5c875cc66ca5f"} Feb 19 10:12:15 crc kubenswrapper[4780]: I0219 10:12:15.287310 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" 
event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerStarted","Data":"bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd"} Feb 19 10:12:17 crc kubenswrapper[4780]: I0219 10:12:17.315647 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerID="bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd" exitCode=0 Feb 19 10:12:17 crc kubenswrapper[4780]: I0219 10:12:17.315743 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerDied","Data":"bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd"} Feb 19 10:12:19 crc kubenswrapper[4780]: I0219 10:12:19.349828 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerStarted","Data":"c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a"} Feb 19 10:12:19 crc kubenswrapper[4780]: I0219 10:12:19.375592 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4hvdg" podStartSLOduration=3.141393756 podStartE2EDuration="8.375561762s" podCreationTimestamp="2026-02-19 10:12:11 +0000 UTC" firstStartedPulling="2026-02-19 10:12:13.267029698 +0000 UTC m=+6676.010687147" lastFinishedPulling="2026-02-19 10:12:18.501197704 +0000 UTC m=+6681.244855153" observedRunningTime="2026-02-19 10:12:19.369055248 +0000 UTC m=+6682.112712697" watchObservedRunningTime="2026-02-19 10:12:19.375561762 +0000 UTC m=+6682.119219211" Feb 19 10:12:20 crc kubenswrapper[4780]: I0219 10:12:20.288247 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:12:20 crc kubenswrapper[4780]: I0219 10:12:20.354346 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:12:20 crc kubenswrapper[4780]: I0219 10:12:20.599827 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w9x5q"] Feb 19 10:12:21 crc kubenswrapper[4780]: I0219 10:12:21.378029 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w9x5q" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="registry-server" containerID="cri-o://20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45" gracePeriod=2 Feb 19 10:12:21 crc kubenswrapper[4780]: I0219 10:12:21.677383 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:21 crc kubenswrapper[4780]: I0219 10:12:21.677819 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:21 crc kubenswrapper[4780]: I0219 10:12:21.742662 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:21 crc kubenswrapper[4780]: I0219 10:12:21.989167 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.066706 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-catalog-content\") pod \"3ffa6752-6f79-431f-9299-3f67b3535dc8\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.067356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-utilities\") pod \"3ffa6752-6f79-431f-9299-3f67b3535dc8\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.067566 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvxm7\" (UniqueName: \"kubernetes.io/projected/3ffa6752-6f79-431f-9299-3f67b3535dc8-kube-api-access-gvxm7\") pod \"3ffa6752-6f79-431f-9299-3f67b3535dc8\" (UID: \"3ffa6752-6f79-431f-9299-3f67b3535dc8\") " Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.068103 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-utilities" (OuterVolumeSpecName: "utilities") pod "3ffa6752-6f79-431f-9299-3f67b3535dc8" (UID: "3ffa6752-6f79-431f-9299-3f67b3535dc8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.069195 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.082794 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffa6752-6f79-431f-9299-3f67b3535dc8-kube-api-access-gvxm7" (OuterVolumeSpecName: "kube-api-access-gvxm7") pod "3ffa6752-6f79-431f-9299-3f67b3535dc8" (UID: "3ffa6752-6f79-431f-9299-3f67b3535dc8"). InnerVolumeSpecName "kube-api-access-gvxm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.175196 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvxm7\" (UniqueName: \"kubernetes.io/projected/3ffa6752-6f79-431f-9299-3f67b3535dc8-kube-api-access-gvxm7\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.211811 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ffa6752-6f79-431f-9299-3f67b3535dc8" (UID: "3ffa6752-6f79-431f-9299-3f67b3535dc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.278297 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffa6752-6f79-431f-9299-3f67b3535dc8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.394826 4780 generic.go:334] "Generic (PLEG): container finished" podID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerID="20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45" exitCode=0 Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.395304 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerDied","Data":"20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45"} Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.395363 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9x5q" event={"ID":"3ffa6752-6f79-431f-9299-3f67b3535dc8","Type":"ContainerDied","Data":"73dea5d3ca5cb148178672a20132f3fb0bde4924559beceb06e2df86575a93d1"} Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.395387 4780 scope.go:117] "RemoveContainer" containerID="20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.396416 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9x5q" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.420211 4780 scope.go:117] "RemoveContainer" containerID="7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.444997 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w9x5q"] Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.461468 4780 scope.go:117] "RemoveContainer" containerID="3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.462285 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w9x5q"] Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.508844 4780 scope.go:117] "RemoveContainer" containerID="20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45" Feb 19 10:12:22 crc kubenswrapper[4780]: E0219 10:12:22.509394 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45\": container with ID starting with 20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45 not found: ID does not exist" containerID="20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.509441 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45"} err="failed to get container status \"20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45\": rpc error: code = NotFound desc = could not find container \"20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45\": container with ID starting with 20ba23bc1659f04b9a34715cea8fc5a1eb979e9e8c95a6956ab444900c8e0e45 not found: ID does 
not exist" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.509469 4780 scope.go:117] "RemoveContainer" containerID="7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5" Feb 19 10:12:22 crc kubenswrapper[4780]: E0219 10:12:22.509924 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5\": container with ID starting with 7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5 not found: ID does not exist" containerID="7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.509959 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5"} err="failed to get container status \"7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5\": rpc error: code = NotFound desc = could not find container \"7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5\": container with ID starting with 7cc5eb75668ff89c9464ff7420b393c405d21e5b477139307dd409a42db743b5 not found: ID does not exist" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.509979 4780 scope.go:117] "RemoveContainer" containerID="3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1" Feb 19 10:12:22 crc kubenswrapper[4780]: E0219 10:12:22.510281 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1\": container with ID starting with 3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1 not found: ID does not exist" containerID="3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1" Feb 19 10:12:22 crc kubenswrapper[4780]: I0219 10:12:22.510312 4780 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1"} err="failed to get container status \"3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1\": rpc error: code = NotFound desc = could not find container \"3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1\": container with ID starting with 3682b9a4bcbef7b862d054123cb309fb971f4b73bcdc20badc60f868ee4204d1 not found: ID does not exist" Feb 19 10:12:23 crc kubenswrapper[4780]: I0219 10:12:23.954537 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" path="/var/lib/kubelet/pods/3ffa6752-6f79-431f-9299-3f67b3535dc8/volumes" Feb 19 10:12:31 crc kubenswrapper[4780]: I0219 10:12:31.730732 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:31 crc kubenswrapper[4780]: I0219 10:12:31.802982 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hvdg"] Feb 19 10:12:32 crc kubenswrapper[4780]: I0219 10:12:32.522727 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4hvdg" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="registry-server" containerID="cri-o://c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a" gracePeriod=2 Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.084380 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.192282 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-utilities\") pod \"6b76d358-6017-49e3-9c4a-abceb8257cf4\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.192780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsx7\" (UniqueName: \"kubernetes.io/projected/6b76d358-6017-49e3-9c4a-abceb8257cf4-kube-api-access-4wsx7\") pod \"6b76d358-6017-49e3-9c4a-abceb8257cf4\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.192884 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-catalog-content\") pod \"6b76d358-6017-49e3-9c4a-abceb8257cf4\" (UID: \"6b76d358-6017-49e3-9c4a-abceb8257cf4\") " Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.193733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-utilities" (OuterVolumeSpecName: "utilities") pod "6b76d358-6017-49e3-9c4a-abceb8257cf4" (UID: "6b76d358-6017-49e3-9c4a-abceb8257cf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.194164 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.204468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b76d358-6017-49e3-9c4a-abceb8257cf4-kube-api-access-4wsx7" (OuterVolumeSpecName: "kube-api-access-4wsx7") pod "6b76d358-6017-49e3-9c4a-abceb8257cf4" (UID: "6b76d358-6017-49e3-9c4a-abceb8257cf4"). InnerVolumeSpecName "kube-api-access-4wsx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.228638 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b76d358-6017-49e3-9c4a-abceb8257cf4" (UID: "6b76d358-6017-49e3-9c4a-abceb8257cf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.298379 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wsx7\" (UniqueName: \"kubernetes.io/projected/6b76d358-6017-49e3-9c4a-abceb8257cf4-kube-api-access-4wsx7\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.298431 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b76d358-6017-49e3-9c4a-abceb8257cf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.538212 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerID="c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a" exitCode=0 Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.538277 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hvdg" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.538303 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerDied","Data":"c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a"} Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.538776 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hvdg" event={"ID":"6b76d358-6017-49e3-9c4a-abceb8257cf4","Type":"ContainerDied","Data":"4f26a61dcd0f3cb52675101255bfc9d8bffd253da41acbf786a5c875cc66ca5f"} Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.538804 4780 scope.go:117] "RemoveContainer" containerID="c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.576399 4780 scope.go:117] "RemoveContainer" 
containerID="bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.591582 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hvdg"] Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.602027 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hvdg"] Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.617590 4780 scope.go:117] "RemoveContainer" containerID="7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.670309 4780 scope.go:117] "RemoveContainer" containerID="c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a" Feb 19 10:12:33 crc kubenswrapper[4780]: E0219 10:12:33.674114 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a\": container with ID starting with c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a not found: ID does not exist" containerID="c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.674186 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a"} err="failed to get container status \"c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a\": rpc error: code = NotFound desc = could not find container \"c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a\": container with ID starting with c06bb503e322c3ced9bacbcdeb3f76520c719c86088156d1fd2b5d053b805e1a not found: ID does not exist" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.674219 4780 scope.go:117] "RemoveContainer" 
containerID="bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd" Feb 19 10:12:33 crc kubenswrapper[4780]: E0219 10:12:33.674983 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd\": container with ID starting with bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd not found: ID does not exist" containerID="bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.675064 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd"} err="failed to get container status \"bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd\": rpc error: code = NotFound desc = could not find container \"bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd\": container with ID starting with bdb5dcdc05e89cda8d773bb3a17a576f7aaebda43c9bbb2f44024d4c7efc4fdd not found: ID does not exist" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.675110 4780 scope.go:117] "RemoveContainer" containerID="7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc" Feb 19 10:12:33 crc kubenswrapper[4780]: E0219 10:12:33.676636 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc\": container with ID starting with 7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc not found: ID does not exist" containerID="7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.676672 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc"} err="failed to get container status \"7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc\": rpc error: code = NotFound desc = could not find container \"7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc\": container with ID starting with 7e6ee58d954cda75ee44aca4e79cb95c1ae398b06509c074b84d4b9c27f873bc not found: ID does not exist" Feb 19 10:12:33 crc kubenswrapper[4780]: I0219 10:12:33.954715 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" path="/var/lib/kubelet/pods/6b76d358-6017-49e3-9c4a-abceb8257cf4/volumes" Feb 19 10:12:37 crc kubenswrapper[4780]: I0219 10:12:37.055513 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-f67lq"] Feb 19 10:12:37 crc kubenswrapper[4780]: I0219 10:12:37.067950 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-353b-account-create-update-smsfg"] Feb 19 10:12:37 crc kubenswrapper[4780]: I0219 10:12:37.079191 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-353b-account-create-update-smsfg"] Feb 19 10:12:37 crc kubenswrapper[4780]: I0219 10:12:37.089508 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-f67lq"] Feb 19 10:12:37 crc kubenswrapper[4780]: I0219 10:12:37.961416 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249c7d5e-f058-4d69-8e12-69472cf9c8b0" path="/var/lib/kubelet/pods/249c7d5e-f058-4d69-8e12-69472cf9c8b0/volumes" Feb 19 10:12:37 crc kubenswrapper[4780]: I0219 10:12:37.966058 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96e2eaa-7162-49ea-adf8-66cc39516d9c" path="/var/lib/kubelet/pods/a96e2eaa-7162-49ea-adf8-66cc39516d9c/volumes" Feb 19 10:12:49 crc kubenswrapper[4780]: I0219 10:12:49.070778 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-db-sync-ckt4w"] Feb 19 10:12:49 crc kubenswrapper[4780]: I0219 10:12:49.082494 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-ckt4w"] Feb 19 10:12:49 crc kubenswrapper[4780]: I0219 10:12:49.969859 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb40d031-7af4-4922-bbe3-9d14ab3e80ff" path="/var/lib/kubelet/pods/bb40d031-7af4-4922-bbe3-9d14ab3e80ff/volumes" Feb 19 10:13:14 crc kubenswrapper[4780]: I0219 10:13:14.078381 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-963e-account-create-update-hlbfq"] Feb 19 10:13:14 crc kubenswrapper[4780]: I0219 10:13:14.095208 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-tpz2f"] Feb 19 10:13:14 crc kubenswrapper[4780]: I0219 10:13:14.114423 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-tpz2f"] Feb 19 10:13:14 crc kubenswrapper[4780]: I0219 10:13:14.125245 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-963e-account-create-update-hlbfq"] Feb 19 10:13:15 crc kubenswrapper[4780]: I0219 10:13:15.962001 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0804b016-a025-4a0e-b9c9-1e7821b7d036" path="/var/lib/kubelet/pods/0804b016-a025-4a0e-b9c9-1e7821b7d036/volumes" Feb 19 10:13:15 crc kubenswrapper[4780]: I0219 10:13:15.964596 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab504879-e849-4ba6-8192-07859f31f11d" path="/var/lib/kubelet/pods/ab504879-e849-4ba6-8192-07859f31f11d/volumes" Feb 19 10:13:16 crc kubenswrapper[4780]: I0219 10:13:16.287692 4780 scope.go:117] "RemoveContainer" containerID="eebbddba5141dd70ad520909dc034d9e96b2cb524bc22dfa73a194e3305b6a14" Feb 19 10:13:16 crc kubenswrapper[4780]: I0219 10:13:16.340514 4780 scope.go:117] "RemoveContainer" containerID="30fc7b8e3590a849452ff1aede5a708f562f72894e44984afaeae8805f3ada5c" Feb 19 10:13:16 crc 
kubenswrapper[4780]: I0219 10:13:16.401234 4780 scope.go:117] "RemoveContainer" containerID="d2df5cf98640ebf85f99699e4f5b9b749aba93771bb5160dc7dca34b2a398819" Feb 19 10:13:16 crc kubenswrapper[4780]: I0219 10:13:16.464214 4780 scope.go:117] "RemoveContainer" containerID="b6b9b8e85f4ea5d29151394b3a8bb6313dca01dc462bbf82fe2f5f5fd425ff07" Feb 19 10:13:16 crc kubenswrapper[4780]: I0219 10:13:16.515591 4780 scope.go:117] "RemoveContainer" containerID="7a673e9211bdd465bdf90dc5e8f198c812136e700e79b26380dc4a1da51eaac8" Feb 19 10:13:30 crc kubenswrapper[4780]: I0219 10:13:30.041700 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-tjbkn"] Feb 19 10:13:30 crc kubenswrapper[4780]: I0219 10:13:30.054525 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-tjbkn"] Feb 19 10:13:31 crc kubenswrapper[4780]: I0219 10:13:31.971791 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6657099e-4964-425d-8036-6d34d3c7faf4" path="/var/lib/kubelet/pods/6657099e-4964-425d-8036-6d34d3c7faf4/volumes" Feb 19 10:14:16 crc kubenswrapper[4780]: I0219 10:14:16.780785 4780 scope.go:117] "RemoveContainer" containerID="478b067e30d0d42789728b14d143d43c3ac92560f5e250ae3df6955e55cba3ea" Feb 19 10:14:36 crc kubenswrapper[4780]: I0219 10:14:36.336673 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:14:36 crc kubenswrapper[4780]: I0219 10:14:36.337552 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.158767 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7k98g"] Feb 19 10:14:48 crc kubenswrapper[4780]: E0219 10:14:48.160280 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="extract-utilities" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160303 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="extract-utilities" Feb 19 10:14:48 crc kubenswrapper[4780]: E0219 10:14:48.160323 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="registry-server" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160330 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="registry-server" Feb 19 10:14:48 crc kubenswrapper[4780]: E0219 10:14:48.160350 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="extract-content" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160357 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="extract-content" Feb 19 10:14:48 crc kubenswrapper[4780]: E0219 10:14:48.160381 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="extract-content" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160412 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="extract-content" Feb 19 10:14:48 crc kubenswrapper[4780]: E0219 10:14:48.160439 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="extract-utilities" Feb 19 10:14:48 crc 
kubenswrapper[4780]: I0219 10:14:48.160447 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="extract-utilities" Feb 19 10:14:48 crc kubenswrapper[4780]: E0219 10:14:48.160484 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="registry-server" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160490 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="registry-server" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160738 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b76d358-6017-49e3-9c4a-abceb8257cf4" containerName="registry-server" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.160755 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffa6752-6f79-431f-9299-3f67b3535dc8" containerName="registry-server" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.164222 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.181502 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7k98g"] Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.286238 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-catalog-content\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.286395 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-utilities\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.286436 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8lf\" (UniqueName: \"kubernetes.io/projected/a7c43d28-790f-441c-9005-c11816032a72-kube-api-access-7t8lf\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.389092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-utilities\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.389207 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7t8lf\" (UniqueName: \"kubernetes.io/projected/a7c43d28-790f-441c-9005-c11816032a72-kube-api-access-7t8lf\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.389422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-catalog-content\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.389883 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-utilities\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.390141 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-catalog-content\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.414776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t8lf\" (UniqueName: \"kubernetes.io/projected/a7c43d28-790f-441c-9005-c11816032a72-kube-api-access-7t8lf\") pod \"certified-operators-7k98g\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:48 crc kubenswrapper[4780]: I0219 10:14:48.496440 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:49 crc kubenswrapper[4780]: I0219 10:14:49.011582 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7k98g"] Feb 19 10:14:49 crc kubenswrapper[4780]: I0219 10:14:49.089247 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerStarted","Data":"5a4f0632d82d2d3017a7bfe673f100207f49f638e8ff438d1b1ee07b93910e88"} Feb 19 10:14:50 crc kubenswrapper[4780]: I0219 10:14:50.102856 4780 generic.go:334] "Generic (PLEG): container finished" podID="a7c43d28-790f-441c-9005-c11816032a72" containerID="e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe" exitCode=0 Feb 19 10:14:50 crc kubenswrapper[4780]: I0219 10:14:50.102916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerDied","Data":"e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe"} Feb 19 10:14:50 crc kubenswrapper[4780]: I0219 10:14:50.105774 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:14:51 crc kubenswrapper[4780]: I0219 10:14:51.117608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerStarted","Data":"e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db"} Feb 19 10:14:54 crc kubenswrapper[4780]: I0219 10:14:54.152740 4780 generic.go:334] "Generic (PLEG): container finished" podID="a7c43d28-790f-441c-9005-c11816032a72" containerID="e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db" exitCode=0 Feb 19 10:14:54 crc kubenswrapper[4780]: I0219 10:14:54.152829 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerDied","Data":"e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db"} Feb 19 10:14:55 crc kubenswrapper[4780]: I0219 10:14:55.165948 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerStarted","Data":"b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463"} Feb 19 10:14:55 crc kubenswrapper[4780]: I0219 10:14:55.192354 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7k98g" podStartSLOduration=2.7283937099999997 podStartE2EDuration="7.192332862s" podCreationTimestamp="2026-02-19 10:14:48 +0000 UTC" firstStartedPulling="2026-02-19 10:14:50.105441401 +0000 UTC m=+6832.849098850" lastFinishedPulling="2026-02-19 10:14:54.569380543 +0000 UTC m=+6837.313038002" observedRunningTime="2026-02-19 10:14:55.185702835 +0000 UTC m=+6837.929360284" watchObservedRunningTime="2026-02-19 10:14:55.192332862 +0000 UTC m=+6837.935990311" Feb 19 10:14:58 crc kubenswrapper[4780]: I0219 10:14:58.496978 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:58 crc kubenswrapper[4780]: I0219 10:14:58.497636 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:58 crc kubenswrapper[4780]: I0219 10:14:58.551541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:59 crc kubenswrapper[4780]: I0219 10:14:59.269701 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:14:59 crc kubenswrapper[4780]: I0219 
10:14:59.341476 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7k98g"] Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.178660 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q"] Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.180979 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.191006 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.192414 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q"] Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.193110 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.289054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360350fe-48d7-4722-9a17-5a20018baa6f-secret-volume\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.289116 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360350fe-48d7-4722-9a17-5a20018baa6f-config-volume\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.289518 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxv87\" (UniqueName: \"kubernetes.io/projected/360350fe-48d7-4722-9a17-5a20018baa6f-kube-api-access-lxv87\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.393380 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360350fe-48d7-4722-9a17-5a20018baa6f-secret-volume\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.393436 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360350fe-48d7-4722-9a17-5a20018baa6f-config-volume\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.393533 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxv87\" (UniqueName: \"kubernetes.io/projected/360350fe-48d7-4722-9a17-5a20018baa6f-kube-api-access-lxv87\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.395672 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/360350fe-48d7-4722-9a17-5a20018baa6f-config-volume\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.409053 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360350fe-48d7-4722-9a17-5a20018baa6f-secret-volume\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.412236 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxv87\" (UniqueName: \"kubernetes.io/projected/360350fe-48d7-4722-9a17-5a20018baa6f-kube-api-access-lxv87\") pod \"collect-profiles-29524935-z7p6q\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:00 crc kubenswrapper[4780]: I0219 10:15:00.509732 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.045992 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q"] Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.235178 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7k98g" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="registry-server" containerID="cri-o://b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463" gracePeriod=2 Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.235581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" event={"ID":"360350fe-48d7-4722-9a17-5a20018baa6f","Type":"ContainerStarted","Data":"d202eb166a0cf89120287ab3cb6e4e9ceef50fbfabbffd49e9b0975419083463"} Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.818496 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.933635 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t8lf\" (UniqueName: \"kubernetes.io/projected/a7c43d28-790f-441c-9005-c11816032a72-kube-api-access-7t8lf\") pod \"a7c43d28-790f-441c-9005-c11816032a72\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.933725 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-utilities\") pod \"a7c43d28-790f-441c-9005-c11816032a72\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.933793 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-catalog-content\") pod \"a7c43d28-790f-441c-9005-c11816032a72\" (UID: \"a7c43d28-790f-441c-9005-c11816032a72\") " Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.935453 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-utilities" (OuterVolumeSpecName: "utilities") pod "a7c43d28-790f-441c-9005-c11816032a72" (UID: "a7c43d28-790f-441c-9005-c11816032a72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:15:01 crc kubenswrapper[4780]: I0219 10:15:01.954535 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c43d28-790f-441c-9005-c11816032a72-kube-api-access-7t8lf" (OuterVolumeSpecName: "kube-api-access-7t8lf") pod "a7c43d28-790f-441c-9005-c11816032a72" (UID: "a7c43d28-790f-441c-9005-c11816032a72"). InnerVolumeSpecName "kube-api-access-7t8lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.041999 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t8lf\" (UniqueName: \"kubernetes.io/projected/a7c43d28-790f-441c-9005-c11816032a72-kube-api-access-7t8lf\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.042063 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.248547 4780 generic.go:334] "Generic (PLEG): container finished" podID="a7c43d28-790f-441c-9005-c11816032a72" containerID="b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463" exitCode=0 Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.248633 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7k98g" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.248694 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerDied","Data":"b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463"} Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.252434 4780 scope.go:117] "RemoveContainer" containerID="b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.252343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k98g" event={"ID":"a7c43d28-790f-441c-9005-c11816032a72","Type":"ContainerDied","Data":"5a4f0632d82d2d3017a7bfe673f100207f49f638e8ff438d1b1ee07b93910e88"} Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.257236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" event={"ID":"360350fe-48d7-4722-9a17-5a20018baa6f","Type":"ContainerStarted","Data":"ea5fd602e72fbf5a4acfc073c8ebd10ad437fc17c4a7eecc58d3975d52cfe33b"} Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.295383 4780 scope.go:117] "RemoveContainer" containerID="e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.330315 4780 scope.go:117] "RemoveContainer" containerID="e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.347251 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" podStartSLOduration=2.347218871 podStartE2EDuration="2.347218871s" podCreationTimestamp="2026-02-19 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:15:02.288137421 +0000 UTC m=+6845.031794870" watchObservedRunningTime="2026-02-19 10:15:02.347218871 +0000 UTC m=+6845.090876320" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.419656 4780 scope.go:117] "RemoveContainer" containerID="b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463" Feb 19 10:15:02 crc kubenswrapper[4780]: E0219 10:15:02.423751 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463\": container with ID starting with b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463 not found: ID does not exist" containerID="b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.424473 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463"} err="failed to get container status \"b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463\": rpc error: code = NotFound desc = could not find container \"b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463\": container with ID starting with b0cae50e5642f2d19e2196b2dacc78170425fb191a85aa051b777e9d93eeb463 not found: ID does not exist" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.424573 4780 scope.go:117] "RemoveContainer" containerID="e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db" Feb 19 10:15:02 crc kubenswrapper[4780]: E0219 10:15:02.428762 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db\": container with ID starting with e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db not found: ID does not exist" containerID="e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.428958 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db"} err="failed to get container status \"e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db\": rpc error: code = NotFound desc = could not find container \"e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db\": container with ID starting with e10dda8e1e412bb33b74dd9765421cd0576ef61ac208854aecf4c7d3719ec9db not found: ID does not exist" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.429076 4780 scope.go:117] "RemoveContainer" containerID="e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe" Feb 19 10:15:02 crc kubenswrapper[4780]: E0219 10:15:02.429736 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe\": container with ID starting with e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe not found: ID does not exist" containerID="e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe" Feb 19 10:15:02 crc kubenswrapper[4780]: I0219 10:15:02.429788 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe"} err="failed to get container status \"e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe\": rpc error: code = NotFound desc = could not find container \"e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe\": container with ID starting with e7e94c64d8a9f573387cf1d05f2352a6a162c0737013ac1ec0919d90e3104afe not found: ID does not exist" Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.157019 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7c43d28-790f-441c-9005-c11816032a72" (UID: "a7c43d28-790f-441c-9005-c11816032a72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.170543 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c43d28-790f-441c-9005-c11816032a72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.273581 4780 generic.go:334] "Generic (PLEG): container finished" podID="360350fe-48d7-4722-9a17-5a20018baa6f" containerID="ea5fd602e72fbf5a4acfc073c8ebd10ad437fc17c4a7eecc58d3975d52cfe33b" exitCode=0 Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.273636 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" event={"ID":"360350fe-48d7-4722-9a17-5a20018baa6f","Type":"ContainerDied","Data":"ea5fd602e72fbf5a4acfc073c8ebd10ad437fc17c4a7eecc58d3975d52cfe33b"} Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.494187 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7k98g"] Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.505371 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7k98g"] Feb 19 10:15:03 crc kubenswrapper[4780]: I0219 10:15:03.960823 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c43d28-790f-441c-9005-c11816032a72" path="/var/lib/kubelet/pods/a7c43d28-790f-441c-9005-c11816032a72/volumes" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.689846 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.813145 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxv87\" (UniqueName: \"kubernetes.io/projected/360350fe-48d7-4722-9a17-5a20018baa6f-kube-api-access-lxv87\") pod \"360350fe-48d7-4722-9a17-5a20018baa6f\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.813237 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360350fe-48d7-4722-9a17-5a20018baa6f-secret-volume\") pod \"360350fe-48d7-4722-9a17-5a20018baa6f\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.813334 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360350fe-48d7-4722-9a17-5a20018baa6f-config-volume\") pod \"360350fe-48d7-4722-9a17-5a20018baa6f\" (UID: \"360350fe-48d7-4722-9a17-5a20018baa6f\") " Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.814502 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360350fe-48d7-4722-9a17-5a20018baa6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "360350fe-48d7-4722-9a17-5a20018baa6f" (UID: "360350fe-48d7-4722-9a17-5a20018baa6f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.820617 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360350fe-48d7-4722-9a17-5a20018baa6f-kube-api-access-lxv87" (OuterVolumeSpecName: "kube-api-access-lxv87") pod "360350fe-48d7-4722-9a17-5a20018baa6f" (UID: "360350fe-48d7-4722-9a17-5a20018baa6f"). 
InnerVolumeSpecName "kube-api-access-lxv87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.820722 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360350fe-48d7-4722-9a17-5a20018baa6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "360350fe-48d7-4722-9a17-5a20018baa6f" (UID: "360350fe-48d7-4722-9a17-5a20018baa6f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.917360 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxv87\" (UniqueName: \"kubernetes.io/projected/360350fe-48d7-4722-9a17-5a20018baa6f-kube-api-access-lxv87\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.917410 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360350fe-48d7-4722-9a17-5a20018baa6f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:04 crc kubenswrapper[4780]: I0219 10:15:04.917429 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360350fe-48d7-4722-9a17-5a20018baa6f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:05 crc kubenswrapper[4780]: I0219 10:15:05.296274 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" event={"ID":"360350fe-48d7-4722-9a17-5a20018baa6f","Type":"ContainerDied","Data":"d202eb166a0cf89120287ab3cb6e4e9ceef50fbfabbffd49e9b0975419083463"} Feb 19 10:15:05 crc kubenswrapper[4780]: I0219 10:15:05.296632 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d202eb166a0cf89120287ab3cb6e4e9ceef50fbfabbffd49e9b0975419083463" Feb 19 10:15:05 crc kubenswrapper[4780]: I0219 10:15:05.296404 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q" Feb 19 10:15:05 crc kubenswrapper[4780]: I0219 10:15:05.780021 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv"] Feb 19 10:15:05 crc kubenswrapper[4780]: I0219 10:15:05.790958 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-tdjlv"] Feb 19 10:15:05 crc kubenswrapper[4780]: I0219 10:15:05.964985 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4dff2c0-9172-46ce-815e-a844fe38de43" path="/var/lib/kubelet/pods/b4dff2c0-9172-46ce-815e-a844fe38de43/volumes" Feb 19 10:15:06 crc kubenswrapper[4780]: I0219 10:15:06.336194 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:15:06 crc kubenswrapper[4780]: I0219 10:15:06.336636 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:15:16 crc kubenswrapper[4780]: I0219 10:15:16.883398 4780 scope.go:117] "RemoveContainer" containerID="5d6adaca4a6f5001697489c44219015174615f185f5755550008db3c72ec1a5f" Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.336320 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.337418 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.337480 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.338883 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3473dbf965cd91e8ed4c387e30616cc39ba732d29a5b17a506eafaab158e0bd9"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.338953 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://3473dbf965cd91e8ed4c387e30616cc39ba732d29a5b17a506eafaab158e0bd9" gracePeriod=600 Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.652021 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="3473dbf965cd91e8ed4c387e30616cc39ba732d29a5b17a506eafaab158e0bd9" exitCode=0 Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.652250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"3473dbf965cd91e8ed4c387e30616cc39ba732d29a5b17a506eafaab158e0bd9"} Feb 19 10:15:36 crc kubenswrapper[4780]: I0219 10:15:36.652388 4780 scope.go:117] "RemoveContainer" containerID="57060c769c2d27f6f66d5ba476c8bc400ef36c1af2a32157edd22bd439dbe987" Feb 19 10:15:37 crc kubenswrapper[4780]: I0219 10:15:37.670073 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c"} Feb 19 10:16:30 crc kubenswrapper[4780]: I0219 10:16:30.257945 4780 generic.go:334] "Generic (PLEG): container finished" podID="d4bcad9c-e8e3-4090-ac8b-015bfce05a61" containerID="701a54c2c20b83e88f5ae5fd1dc987af2d2060acea87443d5fb4135d64a017fc" exitCode=0 Feb 19 10:16:30 crc kubenswrapper[4780]: I0219 10:16:30.258056 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" event={"ID":"d4bcad9c-e8e3-4090-ac8b-015bfce05a61","Type":"ContainerDied","Data":"701a54c2c20b83e88f5ae5fd1dc987af2d2060acea87443d5fb4135d64a017fc"} Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.812750 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.887507 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ceph\") pod \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.887691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkxcb\" (UniqueName: \"kubernetes.io/projected/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-kube-api-access-qkxcb\") pod \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.887714 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-tripleo-cleanup-combined-ca-bundle\") pod \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.887758 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-inventory\") pod \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.888081 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ssh-key-openstack-cell1\") pod \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\" (UID: \"d4bcad9c-e8e3-4090-ac8b-015bfce05a61\") " Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.897249 4780 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ceph" (OuterVolumeSpecName: "ceph") pod "d4bcad9c-e8e3-4090-ac8b-015bfce05a61" (UID: "d4bcad9c-e8e3-4090-ac8b-015bfce05a61"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.904341 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "d4bcad9c-e8e3-4090-ac8b-015bfce05a61" (UID: "d4bcad9c-e8e3-4090-ac8b-015bfce05a61"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.905918 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-kube-api-access-qkxcb" (OuterVolumeSpecName: "kube-api-access-qkxcb") pod "d4bcad9c-e8e3-4090-ac8b-015bfce05a61" (UID: "d4bcad9c-e8e3-4090-ac8b-015bfce05a61"). InnerVolumeSpecName "kube-api-access-qkxcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.927424 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-inventory" (OuterVolumeSpecName: "inventory") pod "d4bcad9c-e8e3-4090-ac8b-015bfce05a61" (UID: "d4bcad9c-e8e3-4090-ac8b-015bfce05a61"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.944380 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d4bcad9c-e8e3-4090-ac8b-015bfce05a61" (UID: "d4bcad9c-e8e3-4090-ac8b-015bfce05a61"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.992321 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.992362 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.992376 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkxcb\" (UniqueName: \"kubernetes.io/projected/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-kube-api-access-qkxcb\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.992386 4780 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:31 crc kubenswrapper[4780]: I0219 10:16:31.992398 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4bcad9c-e8e3-4090-ac8b-015bfce05a61-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:32 crc kubenswrapper[4780]: I0219 10:16:32.279996 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" event={"ID":"d4bcad9c-e8e3-4090-ac8b-015bfce05a61","Type":"ContainerDied","Data":"56eb6a4bb19431d7f3bf3b51cd7c5183f11d21afabd5c5d0ad21eb53ae2f8a03"} Feb 19 10:16:32 crc kubenswrapper[4780]: I0219 10:16:32.280063 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56eb6a4bb19431d7f3bf3b51cd7c5183f11d21afabd5c5d0ad21eb53ae2f8a03" Feb 19 10:16:32 crc kubenswrapper[4780]: I0219 10:16:32.280072 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.530417 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-4xbq5"] Feb 19 10:16:37 crc kubenswrapper[4780]: E0219 10:16:37.531623 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bcad9c-e8e3-4090-ac8b-015bfce05a61" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.531643 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bcad9c-e8e3-4090-ac8b-015bfce05a61" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 10:16:37 crc kubenswrapper[4780]: E0219 10:16:37.531654 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360350fe-48d7-4722-9a17-5a20018baa6f" containerName="collect-profiles" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.531662 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="360350fe-48d7-4722-9a17-5a20018baa6f" containerName="collect-profiles" Feb 19 10:16:37 crc kubenswrapper[4780]: E0219 10:16:37.531690 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="extract-utilities" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.531699 4780 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="extract-utilities" Feb 19 10:16:37 crc kubenswrapper[4780]: E0219 10:16:37.531710 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="registry-server" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.531718 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="registry-server" Feb 19 10:16:37 crc kubenswrapper[4780]: E0219 10:16:37.531734 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="extract-content" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.531742 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="extract-content" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.532012 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bcad9c-e8e3-4090-ac8b-015bfce05a61" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.532031 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c43d28-790f-441c-9005-c11816032a72" containerName="registry-server" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.532052 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="360350fe-48d7-4722-9a17-5a20018baa6f" containerName="collect-profiles" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.533078 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.544619 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.545051 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.545195 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.547581 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.548487 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-4xbq5"] Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.646962 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.647043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.647306 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8qr\" (UniqueName: \"kubernetes.io/projected/ce60d62e-00ef-4205-a928-7df63a1e5837-kube-api-access-tv8qr\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.647505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-inventory\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.647577 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ceph\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.750374 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.750433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.750584 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv8qr\" (UniqueName: \"kubernetes.io/projected/ce60d62e-00ef-4205-a928-7df63a1e5837-kube-api-access-tv8qr\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.750642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-inventory\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.750669 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ceph\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.757970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-inventory\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.758115 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" 
(UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.758752 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ceph\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.759988 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.771418 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv8qr\" (UniqueName: \"kubernetes.io/projected/ce60d62e-00ef-4205-a928-7df63a1e5837-kube-api-access-tv8qr\") pod \"bootstrap-openstack-openstack-cell1-4xbq5\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:37 crc kubenswrapper[4780]: I0219 10:16:37.860254 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:16:38 crc kubenswrapper[4780]: I0219 10:16:38.513776 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-4xbq5"] Feb 19 10:16:39 crc kubenswrapper[4780]: I0219 10:16:39.350113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" event={"ID":"ce60d62e-00ef-4205-a928-7df63a1e5837","Type":"ContainerStarted","Data":"fa1176adbda5d35e564f33bb87ce459208887a05922e67456f1de231f5e949f2"} Feb 19 10:16:39 crc kubenswrapper[4780]: I0219 10:16:39.351094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" event={"ID":"ce60d62e-00ef-4205-a928-7df63a1e5837","Type":"ContainerStarted","Data":"4d62f2671d049305f347bef3f08dbc61bbd7b7f49586dec19e8c37029179e513"} Feb 19 10:16:39 crc kubenswrapper[4780]: I0219 10:16:39.377373 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" podStartSLOduration=1.9479918349999998 podStartE2EDuration="2.377336851s" podCreationTimestamp="2026-02-19 10:16:37 +0000 UTC" firstStartedPulling="2026-02-19 10:16:38.513762755 +0000 UTC m=+6941.257420234" lastFinishedPulling="2026-02-19 10:16:38.943107801 +0000 UTC m=+6941.686765250" observedRunningTime="2026-02-19 10:16:39.369667128 +0000 UTC m=+6942.113324577" watchObservedRunningTime="2026-02-19 10:16:39.377336851 +0000 UTC m=+6942.120994300" Feb 19 10:17:36 crc kubenswrapper[4780]: I0219 10:17:36.336578 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:17:36 crc kubenswrapper[4780]: I0219 10:17:36.337307 4780 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:06 crc kubenswrapper[4780]: I0219 10:18:06.336934 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:18:06 crc kubenswrapper[4780]: I0219 10:18:06.337671 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.336773 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.337481 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.337530 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.338396 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.338452 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" gracePeriod=600 Feb 19 10:18:36 crc kubenswrapper[4780]: E0219 10:18:36.461344 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.702635 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" exitCode=0 Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.702685 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c"} Feb 19 10:18:36 crc 
kubenswrapper[4780]: I0219 10:18:36.702725 4780 scope.go:117] "RemoveContainer" containerID="3473dbf965cd91e8ed4c387e30616cc39ba732d29a5b17a506eafaab158e0bd9" Feb 19 10:18:36 crc kubenswrapper[4780]: I0219 10:18:36.703483 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:18:36 crc kubenswrapper[4780]: E0219 10:18:36.703831 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:18:50 crc kubenswrapper[4780]: I0219 10:18:50.939753 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:18:50 crc kubenswrapper[4780]: E0219 10:18:50.941385 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:19:01 crc kubenswrapper[4780]: I0219 10:19:01.939157 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:19:01 crc kubenswrapper[4780]: E0219 10:19:01.940224 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:19:13 crc kubenswrapper[4780]: I0219 10:19:13.938826 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:19:13 crc kubenswrapper[4780]: E0219 10:19:13.939858 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:19:24 crc kubenswrapper[4780]: I0219 10:19:24.939951 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:19:24 crc kubenswrapper[4780]: E0219 10:19:24.940985 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:19:39 crc kubenswrapper[4780]: I0219 10:19:39.938571 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:19:39 crc kubenswrapper[4780]: E0219 10:19:39.939653 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:19:53 crc kubenswrapper[4780]: I0219 10:19:53.918398 4780 generic.go:334] "Generic (PLEG): container finished" podID="ce60d62e-00ef-4205-a928-7df63a1e5837" containerID="fa1176adbda5d35e564f33bb87ce459208887a05922e67456f1de231f5e949f2" exitCode=0 Feb 19 10:19:53 crc kubenswrapper[4780]: I0219 10:19:53.918625 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" event={"ID":"ce60d62e-00ef-4205-a928-7df63a1e5837","Type":"ContainerDied","Data":"fa1176adbda5d35e564f33bb87ce459208887a05922e67456f1de231f5e949f2"} Feb 19 10:19:54 crc kubenswrapper[4780]: I0219 10:19:54.938581 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:19:54 crc kubenswrapper[4780]: E0219 10:19:54.939955 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.457055 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.535110 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-inventory\") pod \"ce60d62e-00ef-4205-a928-7df63a1e5837\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.535602 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv8qr\" (UniqueName: \"kubernetes.io/projected/ce60d62e-00ef-4205-a928-7df63a1e5837-kube-api-access-tv8qr\") pod \"ce60d62e-00ef-4205-a928-7df63a1e5837\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.535665 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-bootstrap-combined-ca-bundle\") pod \"ce60d62e-00ef-4205-a928-7df63a1e5837\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.535685 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ceph\") pod \"ce60d62e-00ef-4205-a928-7df63a1e5837\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.535911 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ssh-key-openstack-cell1\") pod \"ce60d62e-00ef-4205-a928-7df63a1e5837\" (UID: \"ce60d62e-00ef-4205-a928-7df63a1e5837\") " Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.542227 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/ce60d62e-00ef-4205-a928-7df63a1e5837-kube-api-access-tv8qr" (OuterVolumeSpecName: "kube-api-access-tv8qr") pod "ce60d62e-00ef-4205-a928-7df63a1e5837" (UID: "ce60d62e-00ef-4205-a928-7df63a1e5837"). InnerVolumeSpecName "kube-api-access-tv8qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.542253 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ce60d62e-00ef-4205-a928-7df63a1e5837" (UID: "ce60d62e-00ef-4205-a928-7df63a1e5837"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.542603 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ceph" (OuterVolumeSpecName: "ceph") pod "ce60d62e-00ef-4205-a928-7df63a1e5837" (UID: "ce60d62e-00ef-4205-a928-7df63a1e5837"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.574493 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-inventory" (OuterVolumeSpecName: "inventory") pod "ce60d62e-00ef-4205-a928-7df63a1e5837" (UID: "ce60d62e-00ef-4205-a928-7df63a1e5837"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.582821 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ce60d62e-00ef-4205-a928-7df63a1e5837" (UID: "ce60d62e-00ef-4205-a928-7df63a1e5837"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.639697 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.639757 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.639772 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv8qr\" (UniqueName: \"kubernetes.io/projected/ce60d62e-00ef-4205-a928-7df63a1e5837-kube-api-access-tv8qr\") on node \"crc\" DevicePath \"\"" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.639783 4780 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.639796 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ce60d62e-00ef-4205-a928-7df63a1e5837-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.943594 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.956014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-4xbq5" event={"ID":"ce60d62e-00ef-4205-a928-7df63a1e5837","Type":"ContainerDied","Data":"4d62f2671d049305f347bef3f08dbc61bbd7b7f49586dec19e8c37029179e513"} Feb 19 10:19:55 crc kubenswrapper[4780]: I0219 10:19:55.956082 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d62f2671d049305f347bef3f08dbc61bbd7b7f49586dec19e8c37029179e513" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.071401 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-6st2r"] Feb 19 10:19:56 crc kubenswrapper[4780]: E0219 10:19:56.072142 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce60d62e-00ef-4205-a928-7df63a1e5837" containerName="bootstrap-openstack-openstack-cell1" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.072173 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce60d62e-00ef-4205-a928-7df63a1e5837" containerName="bootstrap-openstack-openstack-cell1" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.072495 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce60d62e-00ef-4205-a928-7df63a1e5837" containerName="bootstrap-openstack-openstack-cell1" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.074530 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.077213 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.077497 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.077700 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.078357 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.084605 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-6st2r"] Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.156629 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ceph\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.156710 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-inventory\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.156766 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wmgsm\" (UniqueName: \"kubernetes.io/projected/ae23b8f2-95fe-4b64-a80e-c6687b239734-kube-api-access-wmgsm\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.156796 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.259535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ceph\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.259647 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-inventory\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.259684 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgsm\" (UniqueName: \"kubernetes.io/projected/ae23b8f2-95fe-4b64-a80e-c6687b239734-kube-api-access-wmgsm\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " 
pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.259719 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.268097 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.268385 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ceph\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.271173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-inventory\") pod \"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.281061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgsm\" (UniqueName: \"kubernetes.io/projected/ae23b8f2-95fe-4b64-a80e-c6687b239734-kube-api-access-wmgsm\") pod 
\"download-cache-openstack-openstack-cell1-6st2r\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.397608 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:19:56 crc kubenswrapper[4780]: I0219 10:19:56.993298 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-6st2r"] Feb 19 10:19:57 crc kubenswrapper[4780]: I0219 10:19:57.000897 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:19:57 crc kubenswrapper[4780]: I0219 10:19:57.990623 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" event={"ID":"ae23b8f2-95fe-4b64-a80e-c6687b239734","Type":"ContainerStarted","Data":"aeb24262b2a9eefa218080400f81d587940736477046a94cf60e2bcb7694cb76"} Feb 19 10:19:57 crc kubenswrapper[4780]: I0219 10:19:57.990737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" event={"ID":"ae23b8f2-95fe-4b64-a80e-c6687b239734","Type":"ContainerStarted","Data":"3d9f838ff1f775002693ca655d7d3e53d0c82c123afa030ef6d8fd4fc4a17b07"} Feb 19 10:19:58 crc kubenswrapper[4780]: I0219 10:19:58.022391 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" podStartSLOduration=1.628420285 podStartE2EDuration="2.022362899s" podCreationTimestamp="2026-02-19 10:19:56 +0000 UTC" firstStartedPulling="2026-02-19 10:19:57.000591053 +0000 UTC m=+7139.744248532" lastFinishedPulling="2026-02-19 10:19:57.394533687 +0000 UTC m=+7140.138191146" observedRunningTime="2026-02-19 10:19:58.014702045 +0000 UTC m=+7140.758359514" watchObservedRunningTime="2026-02-19 10:19:58.022362899 +0000 
UTC m=+7140.766020348" Feb 19 10:20:05 crc kubenswrapper[4780]: I0219 10:20:05.938340 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:20:05 crc kubenswrapper[4780]: E0219 10:20:05.939415 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:20:16 crc kubenswrapper[4780]: I0219 10:20:16.940881 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:20:16 crc kubenswrapper[4780]: E0219 10:20:16.942080 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:20:28 crc kubenswrapper[4780]: I0219 10:20:28.939361 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:20:28 crc kubenswrapper[4780]: E0219 10:20:28.940558 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:20:39 crc kubenswrapper[4780]: I0219 10:20:39.938820 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:20:39 crc kubenswrapper[4780]: E0219 10:20:39.939938 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:20:50 crc kubenswrapper[4780]: I0219 10:20:50.938621 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:20:50 crc kubenswrapper[4780]: E0219 10:20:50.939605 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:21:04 crc kubenswrapper[4780]: I0219 10:21:04.940015 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:21:04 crc kubenswrapper[4780]: E0219 10:21:04.941183 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:21:15 crc kubenswrapper[4780]: I0219 10:21:15.939231 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:21:15 crc kubenswrapper[4780]: E0219 10:21:15.940300 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:21:28 crc kubenswrapper[4780]: I0219 10:21:28.938770 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:21:28 crc kubenswrapper[4780]: E0219 10:21:28.939767 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:21:41 crc kubenswrapper[4780]: I0219 10:21:41.942141 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:21:41 crc kubenswrapper[4780]: E0219 10:21:41.943173 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:21:55 crc kubenswrapper[4780]: I0219 10:21:55.939509 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:21:55 crc kubenswrapper[4780]: E0219 10:21:55.940762 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:21:56 crc kubenswrapper[4780]: I0219 10:21:56.407946 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae23b8f2-95fe-4b64-a80e-c6687b239734" containerID="aeb24262b2a9eefa218080400f81d587940736477046a94cf60e2bcb7694cb76" exitCode=0 Feb 19 10:21:56 crc kubenswrapper[4780]: I0219 10:21:56.408056 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" event={"ID":"ae23b8f2-95fe-4b64-a80e-c6687b239734","Type":"ContainerDied","Data":"aeb24262b2a9eefa218080400f81d587940736477046a94cf60e2bcb7694cb76"} Feb 19 10:21:57 crc kubenswrapper[4780]: I0219 10:21:57.956350 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.110520 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgsm\" (UniqueName: \"kubernetes.io/projected/ae23b8f2-95fe-4b64-a80e-c6687b239734-kube-api-access-wmgsm\") pod \"ae23b8f2-95fe-4b64-a80e-c6687b239734\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.111070 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-inventory\") pod \"ae23b8f2-95fe-4b64-a80e-c6687b239734\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.111286 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ssh-key-openstack-cell1\") pod \"ae23b8f2-95fe-4b64-a80e-c6687b239734\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.111753 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ceph\") pod \"ae23b8f2-95fe-4b64-a80e-c6687b239734\" (UID: \"ae23b8f2-95fe-4b64-a80e-c6687b239734\") " Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.121788 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae23b8f2-95fe-4b64-a80e-c6687b239734-kube-api-access-wmgsm" (OuterVolumeSpecName: "kube-api-access-wmgsm") pod "ae23b8f2-95fe-4b64-a80e-c6687b239734" (UID: "ae23b8f2-95fe-4b64-a80e-c6687b239734"). InnerVolumeSpecName "kube-api-access-wmgsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.121821 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ceph" (OuterVolumeSpecName: "ceph") pod "ae23b8f2-95fe-4b64-a80e-c6687b239734" (UID: "ae23b8f2-95fe-4b64-a80e-c6687b239734"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.150650 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ae23b8f2-95fe-4b64-a80e-c6687b239734" (UID: "ae23b8f2-95fe-4b64-a80e-c6687b239734"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.153291 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-inventory" (OuterVolumeSpecName: "inventory") pod "ae23b8f2-95fe-4b64-a80e-c6687b239734" (UID: "ae23b8f2-95fe-4b64-a80e-c6687b239734"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.226267 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.226328 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.226345 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgsm\" (UniqueName: \"kubernetes.io/projected/ae23b8f2-95fe-4b64-a80e-c6687b239734-kube-api-access-wmgsm\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.226378 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae23b8f2-95fe-4b64-a80e-c6687b239734-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.427602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" event={"ID":"ae23b8f2-95fe-4b64-a80e-c6687b239734","Type":"ContainerDied","Data":"3d9f838ff1f775002693ca655d7d3e53d0c82c123afa030ef6d8fd4fc4a17b07"} Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.427658 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d9f838ff1f775002693ca655d7d3e53d0c82c123afa030ef6d8fd4fc4a17b07" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.427669 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6st2r" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.577155 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-lpwr6"] Feb 19 10:21:58 crc kubenswrapper[4780]: E0219 10:21:58.577928 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae23b8f2-95fe-4b64-a80e-c6687b239734" containerName="download-cache-openstack-openstack-cell1" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.577961 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae23b8f2-95fe-4b64-a80e-c6687b239734" containerName="download-cache-openstack-openstack-cell1" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.578840 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae23b8f2-95fe-4b64-a80e-c6687b239734" containerName="download-cache-openstack-openstack-cell1" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.582394 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.589812 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.589983 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.590205 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.591021 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.607958 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-lpwr6"] Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.741572 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ceph\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.741661 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-inventory\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.741692 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pv7w9\" (UniqueName: \"kubernetes.io/projected/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-kube-api-access-pv7w9\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.741791 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.843869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.844043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ceph\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.844098 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-inventory\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " 
pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.844136 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7w9\" (UniqueName: \"kubernetes.io/projected/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-kube-api-access-pv7w9\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.848694 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.849328 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ceph\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.849416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-inventory\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.866959 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7w9\" (UniqueName: 
\"kubernetes.io/projected/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-kube-api-access-pv7w9\") pod \"configure-network-openstack-openstack-cell1-lpwr6\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:58 crc kubenswrapper[4780]: I0219 10:21:58.916680 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:21:59 crc kubenswrapper[4780]: I0219 10:21:59.553562 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-lpwr6"] Feb 19 10:22:00 crc kubenswrapper[4780]: I0219 10:22:00.451159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" event={"ID":"8073fbd1-1d6a-4efc-bca3-733a5deec1b3","Type":"ContainerStarted","Data":"b3acd7d70fddb4f67bfea88d96c094a4af3f3735958fa7ae49c8c02d68f95200"} Feb 19 10:22:00 crc kubenswrapper[4780]: I0219 10:22:00.451547 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" event={"ID":"8073fbd1-1d6a-4efc-bca3-733a5deec1b3","Type":"ContainerStarted","Data":"de8994e0effaf32a1702ff4ee5d8e9e2bdafb251e3ec593fc7ca6705b60915f6"} Feb 19 10:22:00 crc kubenswrapper[4780]: I0219 10:22:00.476348 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" podStartSLOduration=2.0819319099999998 podStartE2EDuration="2.476310453s" podCreationTimestamp="2026-02-19 10:21:58 +0000 UTC" firstStartedPulling="2026-02-19 10:21:59.564649871 +0000 UTC m=+7262.308307320" lastFinishedPulling="2026-02-19 10:21:59.959028414 +0000 UTC m=+7262.702685863" observedRunningTime="2026-02-19 10:22:00.468529216 +0000 UTC m=+7263.212186685" watchObservedRunningTime="2026-02-19 10:22:00.476310453 +0000 UTC m=+7263.219967902" Feb 19 10:22:07 crc 
kubenswrapper[4780]: I0219 10:22:07.964666 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:22:07 crc kubenswrapper[4780]: E0219 10:22:07.965754 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:22:21 crc kubenswrapper[4780]: I0219 10:22:21.940474 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:22:21 crc kubenswrapper[4780]: E0219 10:22:21.941976 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:22:32 crc kubenswrapper[4780]: I0219 10:22:32.940466 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:22:32 crc kubenswrapper[4780]: E0219 10:22:32.941914 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 
19 10:22:44 crc kubenswrapper[4780]: I0219 10:22:44.939195 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:22:44 crc kubenswrapper[4780]: E0219 10:22:44.940198 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:22:44 crc kubenswrapper[4780]: I0219 10:22:44.978724 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcbj5"] Feb 19 10:22:44 crc kubenswrapper[4780]: I0219 10:22:44.982351 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.005354 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcbj5"] Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.111892 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrfj\" (UniqueName: \"kubernetes.io/projected/19c36941-d870-4eaa-9c30-2a7e9aea717b-kube-api-access-wrrfj\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.111967 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-catalog-content\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " 
pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.112026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-utilities\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.216013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrfj\" (UniqueName: \"kubernetes.io/projected/19c36941-d870-4eaa-9c30-2a7e9aea717b-kube-api-access-wrrfj\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.216089 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-catalog-content\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.216210 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-utilities\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.216819 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-utilities\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 
10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.216891 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-catalog-content\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.241591 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrfj\" (UniqueName: \"kubernetes.io/projected/19c36941-d870-4eaa-9c30-2a7e9aea717b-kube-api-access-wrrfj\") pod \"redhat-operators-bcbj5\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.343289 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:45 crc kubenswrapper[4780]: I0219 10:22:45.972053 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcbj5"] Feb 19 10:22:46 crc kubenswrapper[4780]: I0219 10:22:46.046965 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerStarted","Data":"a66decbfc5fa2aec5ab665cd4f2fe98fd87398fda90385c328346e35da4344e4"} Feb 19 10:22:46 crc kubenswrapper[4780]: E0219 10:22:46.525939 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c36941_d870_4eaa_9c30_2a7e9aea717b.slice/crio-conmon-c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:22:47 crc kubenswrapper[4780]: I0219 10:22:47.061374 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerID="c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde" exitCode=0 Feb 19 10:22:47 crc kubenswrapper[4780]: I0219 10:22:47.061490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerDied","Data":"c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde"} Feb 19 10:22:49 crc kubenswrapper[4780]: I0219 10:22:49.091647 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerStarted","Data":"9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945"} Feb 19 10:22:52 crc kubenswrapper[4780]: I0219 10:22:52.130752 4780 generic.go:334] "Generic (PLEG): container finished" podID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerID="9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945" exitCode=0 Feb 19 10:22:52 crc kubenswrapper[4780]: I0219 10:22:52.131507 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerDied","Data":"9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945"} Feb 19 10:22:53 crc kubenswrapper[4780]: I0219 10:22:53.144106 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerStarted","Data":"fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411"} Feb 19 10:22:53 crc kubenswrapper[4780]: I0219 10:22:53.168970 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcbj5" podStartSLOduration=3.678960143 podStartE2EDuration="9.168949845s" podCreationTimestamp="2026-02-19 10:22:44 +0000 UTC" 
firstStartedPulling="2026-02-19 10:22:47.066022001 +0000 UTC m=+7309.809679450" lastFinishedPulling="2026-02-19 10:22:52.556011703 +0000 UTC m=+7315.299669152" observedRunningTime="2026-02-19 10:22:53.165160155 +0000 UTC m=+7315.908817604" watchObservedRunningTime="2026-02-19 10:22:53.168949845 +0000 UTC m=+7315.912607294" Feb 19 10:22:55 crc kubenswrapper[4780]: I0219 10:22:55.343603 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:55 crc kubenswrapper[4780]: I0219 10:22:55.344453 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.061155 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5glqs"] Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.063531 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.136897 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5glqs"] Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.157622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-utilities\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.157704 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-catalog-content\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " 
pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.157732 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtlk\" (UniqueName: \"kubernetes.io/projected/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-kube-api-access-kjtlk\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.260777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-utilities\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.260836 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-catalog-content\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.260857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtlk\" (UniqueName: \"kubernetes.io/projected/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-kube-api-access-kjtlk\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.261517 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-utilities\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " 
pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.261581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-catalog-content\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.291570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtlk\" (UniqueName: \"kubernetes.io/projected/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-kube-api-access-kjtlk\") pod \"redhat-marketplace-5glqs\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.385726 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.425845 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bcbj5" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="registry-server" probeResult="failure" output=< Feb 19 10:22:56 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Feb 19 10:22:56 crc kubenswrapper[4780]: > Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.946593 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:22:56 crc kubenswrapper[4780]: E0219 10:22:56.947953 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:22:56 crc kubenswrapper[4780]: I0219 10:22:56.959193 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5glqs"] Feb 19 10:22:57 crc kubenswrapper[4780]: I0219 10:22:57.187767 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerStarted","Data":"90076c607f7dfe2bc9640dca3ab857ff45fcbf024b14327d0412bf65ea74f7aa"} Feb 19 10:22:58 crc kubenswrapper[4780]: I0219 10:22:58.199651 4780 generic.go:334] "Generic (PLEG): container finished" podID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerID="f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e" exitCode=0 Feb 19 10:22:58 crc kubenswrapper[4780]: I0219 10:22:58.199799 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerDied","Data":"f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e"} Feb 19 10:22:59 crc kubenswrapper[4780]: I0219 10:22:59.216996 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerStarted","Data":"74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7"} Feb 19 10:23:01 crc kubenswrapper[4780]: I0219 10:23:01.246042 4780 generic.go:334] "Generic (PLEG): container finished" podID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerID="74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7" exitCode=0 Feb 19 10:23:01 crc kubenswrapper[4780]: I0219 10:23:01.246397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerDied","Data":"74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7"} Feb 19 10:23:02 crc kubenswrapper[4780]: I0219 10:23:02.260335 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerStarted","Data":"d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813"} Feb 19 10:23:02 crc kubenswrapper[4780]: I0219 10:23:02.298598 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5glqs" podStartSLOduration=2.7869785929999997 podStartE2EDuration="6.29857234s" podCreationTimestamp="2026-02-19 10:22:56 +0000 UTC" firstStartedPulling="2026-02-19 10:22:58.203434842 +0000 UTC m=+7320.947092291" lastFinishedPulling="2026-02-19 10:23:01.715028589 +0000 UTC m=+7324.458686038" observedRunningTime="2026-02-19 10:23:02.283342325 +0000 UTC m=+7325.026999794" watchObservedRunningTime="2026-02-19 10:23:02.29857234 +0000 UTC m=+7325.042229789" Feb 19 10:23:05 crc kubenswrapper[4780]: I0219 10:23:05.400232 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:23:05 crc kubenswrapper[4780]: I0219 10:23:05.458743 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:23:05 crc kubenswrapper[4780]: I0219 10:23:05.653656 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcbj5"] Feb 19 10:23:06 crc kubenswrapper[4780]: I0219 10:23:06.386079 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:23:06 crc kubenswrapper[4780]: I0219 10:23:06.386179 4780 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:23:06 crc kubenswrapper[4780]: I0219 10:23:06.440560 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:23:07 crc kubenswrapper[4780]: I0219 10:23:07.331247 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bcbj5" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="registry-server" containerID="cri-o://fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411" gracePeriod=2 Feb 19 10:23:07 crc kubenswrapper[4780]: I0219 10:23:07.401197 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:23:07 crc kubenswrapper[4780]: I0219 10:23:07.922056 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:07.999495 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-utilities\") pod \"19c36941-d870-4eaa-9c30-2a7e9aea717b\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:07.999703 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-catalog-content\") pod \"19c36941-d870-4eaa-9c30-2a7e9aea717b\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:07.999780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrrfj\" (UniqueName: 
\"kubernetes.io/projected/19c36941-d870-4eaa-9c30-2a7e9aea717b-kube-api-access-wrrfj\") pod \"19c36941-d870-4eaa-9c30-2a7e9aea717b\" (UID: \"19c36941-d870-4eaa-9c30-2a7e9aea717b\") " Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.000569 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-utilities" (OuterVolumeSpecName: "utilities") pod "19c36941-d870-4eaa-9c30-2a7e9aea717b" (UID: "19c36941-d870-4eaa-9c30-2a7e9aea717b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.014586 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c36941-d870-4eaa-9c30-2a7e9aea717b-kube-api-access-wrrfj" (OuterVolumeSpecName: "kube-api-access-wrrfj") pod "19c36941-d870-4eaa-9c30-2a7e9aea717b" (UID: "19c36941-d870-4eaa-9c30-2a7e9aea717b"). InnerVolumeSpecName "kube-api-access-wrrfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.058675 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5glqs"] Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.105432 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrrfj\" (UniqueName: \"kubernetes.io/projected/19c36941-d870-4eaa-9c30-2a7e9aea717b-kube-api-access-wrrfj\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.105465 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.148328 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19c36941-d870-4eaa-9c30-2a7e9aea717b" (UID: "19c36941-d870-4eaa-9c30-2a7e9aea717b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.207879 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c36941-d870-4eaa-9c30-2a7e9aea717b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.351443 4780 generic.go:334] "Generic (PLEG): container finished" podID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerID="fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411" exitCode=0 Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.351530 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcbj5" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.351547 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerDied","Data":"fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411"} Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.351688 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbj5" event={"ID":"19c36941-d870-4eaa-9c30-2a7e9aea717b","Type":"ContainerDied","Data":"a66decbfc5fa2aec5ab665cd4f2fe98fd87398fda90385c328346e35da4344e4"} Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.351756 4780 scope.go:117] "RemoveContainer" containerID="fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.401472 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcbj5"] Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.401917 4780 scope.go:117] "RemoveContainer" containerID="9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.418213 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bcbj5"] Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.441293 4780 scope.go:117] "RemoveContainer" containerID="c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.484016 4780 scope.go:117] "RemoveContainer" containerID="fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411" Feb 19 10:23:08 crc kubenswrapper[4780]: E0219 10:23:08.484695 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411\": container with ID starting with fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411 not found: ID does not exist" containerID="fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.484742 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411"} err="failed to get container status \"fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411\": rpc error: code = NotFound desc = could not find container \"fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411\": container with ID starting with fb509358cdca871359c633de3eefe6d11faafea388158aa07b3a5bfc590b5411 not found: ID does not exist" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.484779 4780 scope.go:117] "RemoveContainer" containerID="9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945" Feb 19 10:23:08 crc kubenswrapper[4780]: E0219 10:23:08.485709 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945\": container with ID starting with 9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945 not found: ID does not exist" containerID="9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.485807 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945"} err="failed to get container status \"9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945\": rpc error: code = NotFound desc = could not find container \"9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945\": container with ID 
starting with 9669bd738bded711f84618e68f134f74878bc978cf2803185c1df602980b1945 not found: ID does not exist" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.485878 4780 scope.go:117] "RemoveContainer" containerID="c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde" Feb 19 10:23:08 crc kubenswrapper[4780]: E0219 10:23:08.486403 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde\": container with ID starting with c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde not found: ID does not exist" containerID="c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde" Feb 19 10:23:08 crc kubenswrapper[4780]: I0219 10:23:08.486464 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde"} err="failed to get container status \"c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde\": rpc error: code = NotFound desc = could not find container \"c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde\": container with ID starting with c910d416272cc3a54d72b00761e9589cb3f277e0c3cd3eeaf966ba9b19d2ccde not found: ID does not exist" Feb 19 10:23:09 crc kubenswrapper[4780]: I0219 10:23:09.362102 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5glqs" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="registry-server" containerID="cri-o://d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813" gracePeriod=2 Feb 19 10:23:09 crc kubenswrapper[4780]: I0219 10:23:09.961103 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" path="/var/lib/kubelet/pods/19c36941-d870-4eaa-9c30-2a7e9aea717b/volumes" Feb 19 10:23:09 crc 
kubenswrapper[4780]: I0219 10:23:09.966743 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.057567 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-utilities\") pod \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.057678 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-catalog-content\") pod \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.058088 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjtlk\" (UniqueName: \"kubernetes.io/projected/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-kube-api-access-kjtlk\") pod \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\" (UID: \"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6\") " Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.059105 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-utilities" (OuterVolumeSpecName: "utilities") pod "19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" (UID: "19fcf2d7-ea07-43eb-addf-7eaf3bc666b6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.067250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-kube-api-access-kjtlk" (OuterVolumeSpecName: "kube-api-access-kjtlk") pod "19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" (UID: "19fcf2d7-ea07-43eb-addf-7eaf3bc666b6"). InnerVolumeSpecName "kube-api-access-kjtlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.081262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" (UID: "19fcf2d7-ea07-43eb-addf-7eaf3bc666b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.161326 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.161374 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.161389 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjtlk\" (UniqueName: \"kubernetes.io/projected/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6-kube-api-access-kjtlk\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.376166 4780 generic.go:334] "Generic (PLEG): container finished" podID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" 
containerID="d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813" exitCode=0 Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.376214 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5glqs" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.376269 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerDied","Data":"d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813"} Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.377775 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5glqs" event={"ID":"19fcf2d7-ea07-43eb-addf-7eaf3bc666b6","Type":"ContainerDied","Data":"90076c607f7dfe2bc9640dca3ab857ff45fcbf024b14327d0412bf65ea74f7aa"} Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.377810 4780 scope.go:117] "RemoveContainer" containerID="d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.421950 4780 scope.go:117] "RemoveContainer" containerID="74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.422054 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5glqs"] Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.436037 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5glqs"] Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.451835 4780 scope.go:117] "RemoveContainer" containerID="f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.495401 4780 scope.go:117] "RemoveContainer" containerID="d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813" Feb 19 
10:23:10 crc kubenswrapper[4780]: E0219 10:23:10.495895 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813\": container with ID starting with d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813 not found: ID does not exist" containerID="d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.495952 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813"} err="failed to get container status \"d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813\": rpc error: code = NotFound desc = could not find container \"d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813\": container with ID starting with d48ecbba0aac3313f51d3976ef9f47e5d4567f6f1825d279aa6fccbb04caf813 not found: ID does not exist" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.495984 4780 scope.go:117] "RemoveContainer" containerID="74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7" Feb 19 10:23:10 crc kubenswrapper[4780]: E0219 10:23:10.496472 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7\": container with ID starting with 74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7 not found: ID does not exist" containerID="74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.496512 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7"} err="failed to get container status 
\"74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7\": rpc error: code = NotFound desc = could not find container \"74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7\": container with ID starting with 74238fc586e95554d1fef5d06d67e5cb9e6490f81c9c68aea2d4078ed838e3e7 not found: ID does not exist" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.496537 4780 scope.go:117] "RemoveContainer" containerID="f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e" Feb 19 10:23:10 crc kubenswrapper[4780]: E0219 10:23:10.496792 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e\": container with ID starting with f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e not found: ID does not exist" containerID="f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.496815 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e"} err="failed to get container status \"f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e\": rpc error: code = NotFound desc = could not find container \"f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e\": container with ID starting with f5aecf10fb935f2974ce95002896b2d0ff0b2f28b2d3120b0c3b94a78fbe898e not found: ID does not exist" Feb 19 10:23:10 crc kubenswrapper[4780]: I0219 10:23:10.939041 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:23:10 crc kubenswrapper[4780]: E0219 10:23:10.939855 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:23:11 crc kubenswrapper[4780]: I0219 10:23:11.963051 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" path="/var/lib/kubelet/pods/19fcf2d7-ea07-43eb-addf-7eaf3bc666b6/volumes" Feb 19 10:23:24 crc kubenswrapper[4780]: I0219 10:23:24.938809 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:23:24 crc kubenswrapper[4780]: E0219 10:23:24.940182 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:23:26 crc kubenswrapper[4780]: I0219 10:23:26.564712 4780 generic.go:334] "Generic (PLEG): container finished" podID="8073fbd1-1d6a-4efc-bca3-733a5deec1b3" containerID="b3acd7d70fddb4f67bfea88d96c094a4af3f3735958fa7ae49c8c02d68f95200" exitCode=0 Feb 19 10:23:26 crc kubenswrapper[4780]: I0219 10:23:26.564772 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" event={"ID":"8073fbd1-1d6a-4efc-bca3-733a5deec1b3","Type":"ContainerDied","Data":"b3acd7d70fddb4f67bfea88d96c094a4af3f3735958fa7ae49c8c02d68f95200"} Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.112008 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.181278 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv7w9\" (UniqueName: \"kubernetes.io/projected/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-kube-api-access-pv7w9\") pod \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.181454 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ceph\") pod \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.181544 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ssh-key-openstack-cell1\") pod \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.181713 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-inventory\") pod \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\" (UID: \"8073fbd1-1d6a-4efc-bca3-733a5deec1b3\") " Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.187946 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-kube-api-access-pv7w9" (OuterVolumeSpecName: "kube-api-access-pv7w9") pod "8073fbd1-1d6a-4efc-bca3-733a5deec1b3" (UID: "8073fbd1-1d6a-4efc-bca3-733a5deec1b3"). InnerVolumeSpecName "kube-api-access-pv7w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.188354 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ceph" (OuterVolumeSpecName: "ceph") pod "8073fbd1-1d6a-4efc-bca3-733a5deec1b3" (UID: "8073fbd1-1d6a-4efc-bca3-733a5deec1b3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.215648 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8073fbd1-1d6a-4efc-bca3-733a5deec1b3" (UID: "8073fbd1-1d6a-4efc-bca3-733a5deec1b3"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.215889 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-inventory" (OuterVolumeSpecName: "inventory") pod "8073fbd1-1d6a-4efc-bca3-733a5deec1b3" (UID: "8073fbd1-1d6a-4efc-bca3-733a5deec1b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.285095 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv7w9\" (UniqueName: \"kubernetes.io/projected/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-kube-api-access-pv7w9\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.285190 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.285202 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.285214 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8073fbd1-1d6a-4efc-bca3-733a5deec1b3-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.616567 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" event={"ID":"8073fbd1-1d6a-4efc-bca3-733a5deec1b3","Type":"ContainerDied","Data":"de8994e0effaf32a1702ff4ee5d8e9e2bdafb251e3ec593fc7ca6705b60915f6"} Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.617051 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de8994e0effaf32a1702ff4ee5d8e9e2bdafb251e3ec593fc7ca6705b60915f6" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.616705 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-lpwr6" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.706539 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-6l66j"] Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.708942 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="extract-content" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.709076 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="extract-content" Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.709203 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8073fbd1-1d6a-4efc-bca3-733a5deec1b3" containerName="configure-network-openstack-openstack-cell1" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.709290 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8073fbd1-1d6a-4efc-bca3-733a5deec1b3" containerName="configure-network-openstack-openstack-cell1" Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.709389 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="registry-server" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.709463 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="registry-server" Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.709559 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="registry-server" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.709646 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="registry-server" Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.709735 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="extract-utilities" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.713976 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="extract-utilities" Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.714242 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="extract-content" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.714336 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="extract-content" Feb 19 10:23:28 crc kubenswrapper[4780]: E0219 10:23:28.714460 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="extract-utilities" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.714542 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="extract-utilities" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.715270 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c36941-d870-4eaa-9c30-2a7e9aea717b" containerName="registry-server" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.715371 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fcf2d7-ea07-43eb-addf-7eaf3bc666b6" containerName="registry-server" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.715478 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8073fbd1-1d6a-4efc-bca3-733a5deec1b3" containerName="configure-network-openstack-openstack-cell1" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.717727 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.720281 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.721111 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.721540 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.721743 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.730924 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-6l66j"] Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.800771 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ceph\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.801260 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-inventory\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.801398 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g8btm\" (UniqueName: \"kubernetes.io/projected/3d28700e-0688-4901-85e5-cbef8194588b-kube-api-access-g8btm\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.801511 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.903707 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-inventory\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.904105 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8btm\" (UniqueName: \"kubernetes.io/projected/3d28700e-0688-4901-85e5-cbef8194588b-kube-api-access-g8btm\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.904278 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " 
pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.904538 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ceph\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.907905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-inventory\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.908333 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ceph\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.909314 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:28 crc kubenswrapper[4780]: I0219 10:23:28.928558 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8btm\" (UniqueName: \"kubernetes.io/projected/3d28700e-0688-4901-85e5-cbef8194588b-kube-api-access-g8btm\") pod 
\"validate-network-openstack-openstack-cell1-6l66j\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:29 crc kubenswrapper[4780]: I0219 10:23:29.048387 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:29 crc kubenswrapper[4780]: I0219 10:23:29.677359 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-6l66j"] Feb 19 10:23:30 crc kubenswrapper[4780]: I0219 10:23:30.640530 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" event={"ID":"3d28700e-0688-4901-85e5-cbef8194588b","Type":"ContainerStarted","Data":"5c5896df562c26e8a829a4a0d5036b9657d5e8e436b41e0f79899e4cac33bc82"} Feb 19 10:23:30 crc kubenswrapper[4780]: I0219 10:23:30.641257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" event={"ID":"3d28700e-0688-4901-85e5-cbef8194588b","Type":"ContainerStarted","Data":"1fd9311c5b960e57f4f687cbb68ce2f124b43a466e3004b928bad82e4ecc2950"} Feb 19 10:23:30 crc kubenswrapper[4780]: I0219 10:23:30.664533 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" podStartSLOduration=2.250151028 podStartE2EDuration="2.664506202s" podCreationTimestamp="2026-02-19 10:23:28 +0000 UTC" firstStartedPulling="2026-02-19 10:23:29.687983206 +0000 UTC m=+7352.431640655" lastFinishedPulling="2026-02-19 10:23:30.10233838 +0000 UTC m=+7352.845995829" observedRunningTime="2026-02-19 10:23:30.65840923 +0000 UTC m=+7353.402066689" watchObservedRunningTime="2026-02-19 10:23:30.664506202 +0000 UTC m=+7353.408163651" Feb 19 10:23:35 crc kubenswrapper[4780]: I0219 10:23:35.700872 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="3d28700e-0688-4901-85e5-cbef8194588b" containerID="5c5896df562c26e8a829a4a0d5036b9657d5e8e436b41e0f79899e4cac33bc82" exitCode=0 Feb 19 10:23:35 crc kubenswrapper[4780]: I0219 10:23:35.700972 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" event={"ID":"3d28700e-0688-4901-85e5-cbef8194588b","Type":"ContainerDied","Data":"5c5896df562c26e8a829a4a0d5036b9657d5e8e436b41e0f79899e4cac33bc82"} Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.221743 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.348783 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ssh-key-openstack-cell1\") pod \"3d28700e-0688-4901-85e5-cbef8194588b\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.348911 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8btm\" (UniqueName: \"kubernetes.io/projected/3d28700e-0688-4901-85e5-cbef8194588b-kube-api-access-g8btm\") pod \"3d28700e-0688-4901-85e5-cbef8194588b\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.349165 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ceph\") pod \"3d28700e-0688-4901-85e5-cbef8194588b\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.349248 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-inventory\") pod 
\"3d28700e-0688-4901-85e5-cbef8194588b\" (UID: \"3d28700e-0688-4901-85e5-cbef8194588b\") " Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.355913 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ceph" (OuterVolumeSpecName: "ceph") pod "3d28700e-0688-4901-85e5-cbef8194588b" (UID: "3d28700e-0688-4901-85e5-cbef8194588b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.357089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d28700e-0688-4901-85e5-cbef8194588b-kube-api-access-g8btm" (OuterVolumeSpecName: "kube-api-access-g8btm") pod "3d28700e-0688-4901-85e5-cbef8194588b" (UID: "3d28700e-0688-4901-85e5-cbef8194588b"). InnerVolumeSpecName "kube-api-access-g8btm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.386538 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-inventory" (OuterVolumeSpecName: "inventory") pod "3d28700e-0688-4901-85e5-cbef8194588b" (UID: "3d28700e-0688-4901-85e5-cbef8194588b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.392248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "3d28700e-0688-4901-85e5-cbef8194588b" (UID: "3d28700e-0688-4901-85e5-cbef8194588b"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.452983 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.453035 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8btm\" (UniqueName: \"kubernetes.io/projected/3d28700e-0688-4901-85e5-cbef8194588b-kube-api-access-g8btm\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.453048 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.453063 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d28700e-0688-4901-85e5-cbef8194588b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.729584 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" event={"ID":"3d28700e-0688-4901-85e5-cbef8194588b","Type":"ContainerDied","Data":"1fd9311c5b960e57f4f687cbb68ce2f124b43a466e3004b928bad82e4ecc2950"} Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.729965 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd9311c5b960e57f4f687cbb68ce2f124b43a466e3004b928bad82e4ecc2950" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.729749 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-6l66j" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.813937 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-kkk8v"] Feb 19 10:23:37 crc kubenswrapper[4780]: E0219 10:23:37.814773 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d28700e-0688-4901-85e5-cbef8194588b" containerName="validate-network-openstack-openstack-cell1" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.814861 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d28700e-0688-4901-85e5-cbef8194588b" containerName="validate-network-openstack-openstack-cell1" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.815163 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d28700e-0688-4901-85e5-cbef8194588b" containerName="validate-network-openstack-openstack-cell1" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.816110 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.819506 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.820539 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.820632 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.821230 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.826638 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-kkk8v"] Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.871547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ceph\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.871615 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wk8\" (UniqueName: \"kubernetes.io/projected/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-kube-api-access-s2wk8\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.871681 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.871762 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-inventory\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.974870 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ceph\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.974943 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wk8\" (UniqueName: \"kubernetes.io/projected/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-kube-api-access-s2wk8\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.974996 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc 
kubenswrapper[4780]: I0219 10:23:37.975164 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-inventory\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.982445 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.982698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ceph\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.982762 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-inventory\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:37 crc kubenswrapper[4780]: I0219 10:23:37.993222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2wk8\" (UniqueName: \"kubernetes.io/projected/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-kube-api-access-s2wk8\") pod \"install-os-openstack-openstack-cell1-kkk8v\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " 
pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:38 crc kubenswrapper[4780]: I0219 10:23:38.181185 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:23:38 crc kubenswrapper[4780]: I0219 10:23:38.832007 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-kkk8v"] Feb 19 10:23:38 crc kubenswrapper[4780]: I0219 10:23:38.939412 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:23:39 crc kubenswrapper[4780]: I0219 10:23:39.751804 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" event={"ID":"3e92bcc6-d6ee-4a29-ad83-37b71a80085f","Type":"ContainerStarted","Data":"8648d2c8517cbe23c0f8fe6ce1ab77a3d32526854b2624eefb3a33757a7182b1"} Feb 19 10:23:39 crc kubenswrapper[4780]: I0219 10:23:39.761747 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"fd366f88b2f1c53058fb51f81ce5e379d833f904e3e651a4b1c101a251bd85ce"} Feb 19 10:23:40 crc kubenswrapper[4780]: I0219 10:23:40.776911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" event={"ID":"3e92bcc6-d6ee-4a29-ad83-37b71a80085f","Type":"ContainerStarted","Data":"7684c656560ca1f6b8b918f38d6501f81084e8ee638bc83ed0aef672b48addda"} Feb 19 10:23:40 crc kubenswrapper[4780]: I0219 10:23:40.813424 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" podStartSLOduration=3.174325081 podStartE2EDuration="3.813395898s" podCreationTimestamp="2026-02-19 10:23:37 +0000 UTC" firstStartedPulling="2026-02-19 10:23:38.836840331 +0000 UTC 
m=+7361.580497780" lastFinishedPulling="2026-02-19 10:23:39.475911138 +0000 UTC m=+7362.219568597" observedRunningTime="2026-02-19 10:23:40.803355101 +0000 UTC m=+7363.547012550" watchObservedRunningTime="2026-02-19 10:23:40.813395898 +0000 UTC m=+7363.557053357" Feb 19 10:24:26 crc kubenswrapper[4780]: I0219 10:24:26.319417 4780 generic.go:334] "Generic (PLEG): container finished" podID="3e92bcc6-d6ee-4a29-ad83-37b71a80085f" containerID="7684c656560ca1f6b8b918f38d6501f81084e8ee638bc83ed0aef672b48addda" exitCode=0 Feb 19 10:24:26 crc kubenswrapper[4780]: I0219 10:24:26.319496 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" event={"ID":"3e92bcc6-d6ee-4a29-ad83-37b71a80085f","Type":"ContainerDied","Data":"7684c656560ca1f6b8b918f38d6501f81084e8ee638bc83ed0aef672b48addda"} Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.909389 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.966236 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ssh-key-openstack-cell1\") pod \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.966328 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ceph\") pod \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.966645 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2wk8\" (UniqueName: 
\"kubernetes.io/projected/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-kube-api-access-s2wk8\") pod \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.966857 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-inventory\") pod \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\" (UID: \"3e92bcc6-d6ee-4a29-ad83-37b71a80085f\") " Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.989468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-kube-api-access-s2wk8" (OuterVolumeSpecName: "kube-api-access-s2wk8") pod "3e92bcc6-d6ee-4a29-ad83-37b71a80085f" (UID: "3e92bcc6-d6ee-4a29-ad83-37b71a80085f"). InnerVolumeSpecName "kube-api-access-s2wk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:24:27 crc kubenswrapper[4780]: I0219 10:24:27.994339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ceph" (OuterVolumeSpecName: "ceph") pod "3e92bcc6-d6ee-4a29-ad83-37b71a80085f" (UID: "3e92bcc6-d6ee-4a29-ad83-37b71a80085f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.044505 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-inventory" (OuterVolumeSpecName: "inventory") pod "3e92bcc6-d6ee-4a29-ad83-37b71a80085f" (UID: "3e92bcc6-d6ee-4a29-ad83-37b71a80085f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.101511 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "3e92bcc6-d6ee-4a29-ad83-37b71a80085f" (UID: "3e92bcc6-d6ee-4a29-ad83-37b71a80085f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.106569 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.106616 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.106628 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.106638 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2wk8\" (UniqueName: \"kubernetes.io/projected/3e92bcc6-d6ee-4a29-ad83-37b71a80085f-kube-api-access-s2wk8\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.342583 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" event={"ID":"3e92bcc6-d6ee-4a29-ad83-37b71a80085f","Type":"ContainerDied","Data":"8648d2c8517cbe23c0f8fe6ce1ab77a3d32526854b2624eefb3a33757a7182b1"} Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.342646 4780 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8648d2c8517cbe23c0f8fe6ce1ab77a3d32526854b2624eefb3a33757a7182b1" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.342773 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-kkk8v" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.449601 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2cn8b"] Feb 19 10:24:28 crc kubenswrapper[4780]: E0219 10:24:28.450784 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e92bcc6-d6ee-4a29-ad83-37b71a80085f" containerName="install-os-openstack-openstack-cell1" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.450820 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e92bcc6-d6ee-4a29-ad83-37b71a80085f" containerName="install-os-openstack-openstack-cell1" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.453568 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e92bcc6-d6ee-4a29-ad83-37b71a80085f" containerName="install-os-openstack-openstack-cell1" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.454771 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.458211 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.459356 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.459462 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.459689 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.468192 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2cn8b"] Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.621274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmxc\" (UniqueName: \"kubernetes.io/projected/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-kube-api-access-8hmxc\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.621504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.621540 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-inventory\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.621579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ceph\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.724750 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.724810 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-inventory\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.724857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ceph\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 
10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.724947 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmxc\" (UniqueName: \"kubernetes.io/projected/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-kube-api-access-8hmxc\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.732574 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-inventory\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.737784 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.745220 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ceph\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.746564 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmxc\" (UniqueName: \"kubernetes.io/projected/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-kube-api-access-8hmxc\") pod \"configure-os-openstack-openstack-cell1-2cn8b\" (UID: 
\"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:28 crc kubenswrapper[4780]: I0219 10:24:28.781612 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:24:29 crc kubenswrapper[4780]: W0219 10:24:29.459374 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0bf8c8_26a1_4c08_9b45_859f9bc8d65c.slice/crio-477a819438895d677379178e6dae51a6caf254cd8f09e91cf822c03b882171cd WatchSource:0}: Error finding container 477a819438895d677379178e6dae51a6caf254cd8f09e91cf822c03b882171cd: Status 404 returned error can't find the container with id 477a819438895d677379178e6dae51a6caf254cd8f09e91cf822c03b882171cd Feb 19 10:24:29 crc kubenswrapper[4780]: I0219 10:24:29.468702 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2cn8b"] Feb 19 10:24:30 crc kubenswrapper[4780]: I0219 10:24:30.367670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" event={"ID":"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c","Type":"ContainerStarted","Data":"00c4f1fb21d3bb5ab027d0a31bc93a1d4874205aec5bf6abb754e918123fe13c"} Feb 19 10:24:30 crc kubenswrapper[4780]: I0219 10:24:30.368864 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" event={"ID":"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c","Type":"ContainerStarted","Data":"477a819438895d677379178e6dae51a6caf254cd8f09e91cf822c03b882171cd"} Feb 19 10:24:30 crc kubenswrapper[4780]: I0219 10:24:30.402223 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" podStartSLOduration=1.974363839 podStartE2EDuration="2.402189401s" podCreationTimestamp="2026-02-19 10:24:28 +0000 
UTC" firstStartedPulling="2026-02-19 10:24:29.47554089 +0000 UTC m=+7412.219198339" lastFinishedPulling="2026-02-19 10:24:29.903366452 +0000 UTC m=+7412.647023901" observedRunningTime="2026-02-19 10:24:30.395345379 +0000 UTC m=+7413.139002838" watchObservedRunningTime="2026-02-19 10:24:30.402189401 +0000 UTC m=+7413.145846850" Feb 19 10:25:14 crc kubenswrapper[4780]: I0219 10:25:14.869764 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" containerID="00c4f1fb21d3bb5ab027d0a31bc93a1d4874205aec5bf6abb754e918123fe13c" exitCode=0 Feb 19 10:25:14 crc kubenswrapper[4780]: I0219 10:25:14.869869 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" event={"ID":"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c","Type":"ContainerDied","Data":"00c4f1fb21d3bb5ab027d0a31bc93a1d4874205aec5bf6abb754e918123fe13c"} Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.398097 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.599053 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ceph\") pod \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.599361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmxc\" (UniqueName: \"kubernetes.io/projected/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-kube-api-access-8hmxc\") pod \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.599572 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ssh-key-openstack-cell1\") pod \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.600656 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-inventory\") pod \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\" (UID: \"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c\") " Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.607399 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-kube-api-access-8hmxc" (OuterVolumeSpecName: "kube-api-access-8hmxc") pod "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" (UID: "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c"). InnerVolumeSpecName "kube-api-access-8hmxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.607485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ceph" (OuterVolumeSpecName: "ceph") pod "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" (UID: "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.633264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-inventory" (OuterVolumeSpecName: "inventory") pod "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" (UID: "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.634782 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" (UID: "ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.703821 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmxc\" (UniqueName: \"kubernetes.io/projected/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-kube-api-access-8hmxc\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.704255 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.704382 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.704486 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.895508 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" event={"ID":"ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c","Type":"ContainerDied","Data":"477a819438895d677379178e6dae51a6caf254cd8f09e91cf822c03b882171cd"} Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.895895 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477a819438895d677379178e6dae51a6caf254cd8f09e91cf822c03b882171cd" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.895614 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2cn8b" Feb 19 10:25:16 crc kubenswrapper[4780]: I0219 10:25:16.999666 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-k2pj5"] Feb 19 10:25:17 crc kubenswrapper[4780]: E0219 10:25:17.000329 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" containerName="configure-os-openstack-openstack-cell1" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.000360 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" containerName="configure-os-openstack-openstack-cell1" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.000642 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c" containerName="configure-os-openstack-openstack-cell1" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.001922 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.007963 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.008298 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.008476 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.008679 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.023321 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-k2pj5"] Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.113271 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ceph\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.115240 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-inventory-0\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.115549 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.115619 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75n8\" (UniqueName: \"kubernetes.io/projected/4f14b4e2-5e66-494b-ac59-559917345d5e-kube-api-access-d75n8\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.218007 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-inventory-0\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.218624 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.218863 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d75n8\" (UniqueName: \"kubernetes.io/projected/4f14b4e2-5e66-494b-ac59-559917345d5e-kube-api-access-d75n8\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.219070 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ceph\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.223546 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.223546 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ceph\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.223553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-inventory-0\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.248249 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d75n8\" (UniqueName: \"kubernetes.io/projected/4f14b4e2-5e66-494b-ac59-559917345d5e-kube-api-access-d75n8\") pod \"ssh-known-hosts-openstack-k2pj5\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.328682 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.973855 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-k2pj5"] Feb 19 10:25:17 crc kubenswrapper[4780]: I0219 10:25:17.983385 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:25:18 crc kubenswrapper[4780]: I0219 10:25:18.937601 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-k2pj5" event={"ID":"4f14b4e2-5e66-494b-ac59-559917345d5e","Type":"ContainerStarted","Data":"f7d806425e31a59dc7416e0013e9c8b10c6cb111641d4951e49904c4213d3353"} Feb 19 10:25:19 crc kubenswrapper[4780]: I0219 10:25:19.953653 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-k2pj5" event={"ID":"4f14b4e2-5e66-494b-ac59-559917345d5e","Type":"ContainerStarted","Data":"e48cccff2dcafe64cafe5b37cd3afd7ea100ba79bda1758d9b66819e2362b4b6"} Feb 19 10:25:19 crc kubenswrapper[4780]: I0219 10:25:19.976199 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-k2pj5" podStartSLOduration=2.962643018 podStartE2EDuration="3.976181698s" podCreationTimestamp="2026-02-19 10:25:16 +0000 UTC" firstStartedPulling="2026-02-19 10:25:17.98292967 +0000 UTC m=+7460.726587119" lastFinishedPulling="2026-02-19 10:25:18.99646835 +0000 UTC m=+7461.740125799" observedRunningTime="2026-02-19 10:25:19.971087963 +0000 UTC m=+7462.714745412" watchObservedRunningTime="2026-02-19 10:25:19.976181698 +0000 UTC m=+7462.719839147" Feb 19 10:25:29 crc kubenswrapper[4780]: I0219 10:25:29.084958 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f14b4e2-5e66-494b-ac59-559917345d5e" containerID="e48cccff2dcafe64cafe5b37cd3afd7ea100ba79bda1758d9b66819e2362b4b6" exitCode=0 Feb 19 10:25:29 crc kubenswrapper[4780]: I0219 10:25:29.085067 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-k2pj5" event={"ID":"4f14b4e2-5e66-494b-ac59-559917345d5e","Type":"ContainerDied","Data":"e48cccff2dcafe64cafe5b37cd3afd7ea100ba79bda1758d9b66819e2362b4b6"} Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.622862 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.786021 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ceph\") pod \"4f14b4e2-5e66-494b-ac59-559917345d5e\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.786098 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-inventory-0\") pod \"4f14b4e2-5e66-494b-ac59-559917345d5e\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.786467 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d75n8\" (UniqueName: \"kubernetes.io/projected/4f14b4e2-5e66-494b-ac59-559917345d5e-kube-api-access-d75n8\") pod \"4f14b4e2-5e66-494b-ac59-559917345d5e\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.786547 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ssh-key-openstack-cell1\") pod \"4f14b4e2-5e66-494b-ac59-559917345d5e\" (UID: \"4f14b4e2-5e66-494b-ac59-559917345d5e\") " Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.795530 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ceph" (OuterVolumeSpecName: "ceph") pod "4f14b4e2-5e66-494b-ac59-559917345d5e" (UID: "4f14b4e2-5e66-494b-ac59-559917345d5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.799398 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f14b4e2-5e66-494b-ac59-559917345d5e-kube-api-access-d75n8" (OuterVolumeSpecName: "kube-api-access-d75n8") pod "4f14b4e2-5e66-494b-ac59-559917345d5e" (UID: "4f14b4e2-5e66-494b-ac59-559917345d5e"). InnerVolumeSpecName "kube-api-access-d75n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.819010 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4f14b4e2-5e66-494b-ac59-559917345d5e" (UID: "4f14b4e2-5e66-494b-ac59-559917345d5e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.821113 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4f14b4e2-5e66-494b-ac59-559917345d5e" (UID: "4f14b4e2-5e66-494b-ac59-559917345d5e"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.890789 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.891020 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.891047 4780 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4f14b4e2-5e66-494b-ac59-559917345d5e-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:30 crc kubenswrapper[4780]: I0219 10:25:30.891081 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d75n8\" (UniqueName: \"kubernetes.io/projected/4f14b4e2-5e66-494b-ac59-559917345d5e-kube-api-access-d75n8\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.124543 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-k2pj5" event={"ID":"4f14b4e2-5e66-494b-ac59-559917345d5e","Type":"ContainerDied","Data":"f7d806425e31a59dc7416e0013e9c8b10c6cb111641d4951e49904c4213d3353"} Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.124603 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-k2pj5" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.124606 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7d806425e31a59dc7416e0013e9c8b10c6cb111641d4951e49904c4213d3353" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.243047 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-dbmsk"] Feb 19 10:25:31 crc kubenswrapper[4780]: E0219 10:25:31.244427 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f14b4e2-5e66-494b-ac59-559917345d5e" containerName="ssh-known-hosts-openstack" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.244455 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f14b4e2-5e66-494b-ac59-559917345d5e" containerName="ssh-known-hosts-openstack" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.244779 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f14b4e2-5e66-494b-ac59-559917345d5e" containerName="ssh-known-hosts-openstack" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.246033 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.253579 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.253708 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.254141 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.254144 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.278807 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-dbmsk"] Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.406821 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-inventory\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.406900 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.407066 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrjgl\" 
(UniqueName: \"kubernetes.io/projected/c5a28a0e-38b0-4936-be75-0b2e880b0696-kube-api-access-qrjgl\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.407272 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ceph\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.510801 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-inventory\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.510875 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.510967 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrjgl\" (UniqueName: \"kubernetes.io/projected/c5a28a0e-38b0-4936-be75-0b2e880b0696-kube-api-access-qrjgl\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.511014 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ceph\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.518597 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.528906 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-inventory\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.530036 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ceph\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.534582 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrjgl\" (UniqueName: \"kubernetes.io/projected/c5a28a0e-38b0-4936-be75-0b2e880b0696-kube-api-access-qrjgl\") pod \"run-os-openstack-openstack-cell1-dbmsk\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:31 crc kubenswrapper[4780]: I0219 10:25:31.572280 4780 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:32 crc kubenswrapper[4780]: I0219 10:25:32.330758 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-dbmsk"] Feb 19 10:25:33 crc kubenswrapper[4780]: I0219 10:25:33.148468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" event={"ID":"c5a28a0e-38b0-4936-be75-0b2e880b0696","Type":"ContainerStarted","Data":"5c01c7d7394052e2c5439a7f0528fdf1301a8643fc1a252f4fa93ada9f9a289d"} Feb 19 10:25:34 crc kubenswrapper[4780]: I0219 10:25:34.161397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" event={"ID":"c5a28a0e-38b0-4936-be75-0b2e880b0696","Type":"ContainerStarted","Data":"ebd98cd5cc7a9d9839edfd54b63e83ae6062b90a110aff82dcb25e6cbd52e1f8"} Feb 19 10:25:34 crc kubenswrapper[4780]: I0219 10:25:34.186170 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" podStartSLOduration=1.681746513 podStartE2EDuration="3.186108898s" podCreationTimestamp="2026-02-19 10:25:31 +0000 UTC" firstStartedPulling="2026-02-19 10:25:32.336725212 +0000 UTC m=+7475.080382661" lastFinishedPulling="2026-02-19 10:25:33.841087597 +0000 UTC m=+7476.584745046" observedRunningTime="2026-02-19 10:25:34.182023689 +0000 UTC m=+7476.925681158" watchObservedRunningTime="2026-02-19 10:25:34.186108898 +0000 UTC m=+7476.929766347" Feb 19 10:25:43 crc kubenswrapper[4780]: I0219 10:25:43.285258 4780 generic.go:334] "Generic (PLEG): container finished" podID="c5a28a0e-38b0-4936-be75-0b2e880b0696" containerID="ebd98cd5cc7a9d9839edfd54b63e83ae6062b90a110aff82dcb25e6cbd52e1f8" exitCode=0 Feb 19 10:25:43 crc kubenswrapper[4780]: I0219 10:25:43.285372 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" 
event={"ID":"c5a28a0e-38b0-4936-be75-0b2e880b0696","Type":"ContainerDied","Data":"ebd98cd5cc7a9d9839edfd54b63e83ae6062b90a110aff82dcb25e6cbd52e1f8"} Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.870863 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.989852 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ceph\") pod \"c5a28a0e-38b0-4936-be75-0b2e880b0696\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.989911 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-inventory\") pod \"c5a28a0e-38b0-4936-be75-0b2e880b0696\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.990326 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ssh-key-openstack-cell1\") pod \"c5a28a0e-38b0-4936-be75-0b2e880b0696\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.990419 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrjgl\" (UniqueName: \"kubernetes.io/projected/c5a28a0e-38b0-4936-be75-0b2e880b0696-kube-api-access-qrjgl\") pod \"c5a28a0e-38b0-4936-be75-0b2e880b0696\" (UID: \"c5a28a0e-38b0-4936-be75-0b2e880b0696\") " Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.999392 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ceph" (OuterVolumeSpecName: "ceph") pod 
"c5a28a0e-38b0-4936-be75-0b2e880b0696" (UID: "c5a28a0e-38b0-4936-be75-0b2e880b0696"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:44 crc kubenswrapper[4780]: I0219 10:25:44.999513 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a28a0e-38b0-4936-be75-0b2e880b0696-kube-api-access-qrjgl" (OuterVolumeSpecName: "kube-api-access-qrjgl") pod "c5a28a0e-38b0-4936-be75-0b2e880b0696" (UID: "c5a28a0e-38b0-4936-be75-0b2e880b0696"). InnerVolumeSpecName "kube-api-access-qrjgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.024365 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c5a28a0e-38b0-4936-be75-0b2e880b0696" (UID: "c5a28a0e-38b0-4936-be75-0b2e880b0696"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.027570 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-inventory" (OuterVolumeSpecName: "inventory") pod "c5a28a0e-38b0-4936-be75-0b2e880b0696" (UID: "c5a28a0e-38b0-4936-be75-0b2e880b0696"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.095384 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.095769 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrjgl\" (UniqueName: \"kubernetes.io/projected/c5a28a0e-38b0-4936-be75-0b2e880b0696-kube-api-access-qrjgl\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.095790 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.095810 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5a28a0e-38b0-4936-be75-0b2e880b0696-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.322936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" event={"ID":"c5a28a0e-38b0-4936-be75-0b2e880b0696","Type":"ContainerDied","Data":"5c01c7d7394052e2c5439a7f0528fdf1301a8643fc1a252f4fa93ada9f9a289d"} Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.323011 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c01c7d7394052e2c5439a7f0528fdf1301a8643fc1a252f4fa93ada9f9a289d" Feb 19 10:25:45 crc kubenswrapper[4780]: I0219 10:25:45.323167 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dbmsk" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.091281 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-pxmc7"] Feb 19 10:25:46 crc kubenswrapper[4780]: E0219 10:25:46.091982 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a28a0e-38b0-4936-be75-0b2e880b0696" containerName="run-os-openstack-openstack-cell1" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.092001 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a28a0e-38b0-4936-be75-0b2e880b0696" containerName="run-os-openstack-openstack-cell1" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.092269 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a28a0e-38b0-4936-be75-0b2e880b0696" containerName="run-os-openstack-openstack-cell1" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.093410 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.096724 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.096892 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.097110 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.097175 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.121552 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-pxmc7"] Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.137777 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dw54\" (UniqueName: \"kubernetes.io/projected/6924c64f-eb99-482b-ac0d-97737deb9e6c-kube-api-access-7dw54\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.138439 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-inventory\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.138556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ceph\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.138691 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.241444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dw54\" (UniqueName: \"kubernetes.io/projected/6924c64f-eb99-482b-ac0d-97737deb9e6c-kube-api-access-7dw54\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.241525 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-inventory\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.241562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ceph\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.241602 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.248868 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ceph\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.251750 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.260957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-inventory\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 crc kubenswrapper[4780]: I0219 10:25:46.263631 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dw54\" (UniqueName: \"kubernetes.io/projected/6924c64f-eb99-482b-ac0d-97737deb9e6c-kube-api-access-7dw54\") pod \"reboot-os-openstack-openstack-cell1-pxmc7\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:46 
crc kubenswrapper[4780]: I0219 10:25:46.420060 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:25:47 crc kubenswrapper[4780]: I0219 10:25:47.081038 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-pxmc7"] Feb 19 10:25:47 crc kubenswrapper[4780]: I0219 10:25:47.345902 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" event={"ID":"6924c64f-eb99-482b-ac0d-97737deb9e6c","Type":"ContainerStarted","Data":"1a18e69542a331b6eaa625fb83fa1a27c434c481c8e1d083c19bf4b58dccb4ce"} Feb 19 10:25:48 crc kubenswrapper[4780]: I0219 10:25:48.368726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" event={"ID":"6924c64f-eb99-482b-ac0d-97737deb9e6c","Type":"ContainerStarted","Data":"9cc87d7d17a30bcded1e85ea71c5e71c49321e16d3b5b683801db4c6216ce916"} Feb 19 10:25:48 crc kubenswrapper[4780]: I0219 10:25:48.395632 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" podStartSLOduration=1.906418141 podStartE2EDuration="2.395598574s" podCreationTimestamp="2026-02-19 10:25:46 +0000 UTC" firstStartedPulling="2026-02-19 10:25:47.088402229 +0000 UTC m=+7489.832059678" lastFinishedPulling="2026-02-19 10:25:47.577582662 +0000 UTC m=+7490.321240111" observedRunningTime="2026-02-19 10:25:48.388365052 +0000 UTC m=+7491.132022501" watchObservedRunningTime="2026-02-19 10:25:48.395598574 +0000 UTC m=+7491.139256023" Feb 19 10:26:05 crc kubenswrapper[4780]: I0219 10:26:05.585654 4780 generic.go:334] "Generic (PLEG): container finished" podID="6924c64f-eb99-482b-ac0d-97737deb9e6c" containerID="9cc87d7d17a30bcded1e85ea71c5e71c49321e16d3b5b683801db4c6216ce916" exitCode=0 Feb 19 10:26:05 crc kubenswrapper[4780]: I0219 10:26:05.585732 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" event={"ID":"6924c64f-eb99-482b-ac0d-97737deb9e6c","Type":"ContainerDied","Data":"9cc87d7d17a30bcded1e85ea71c5e71c49321e16d3b5b683801db4c6216ce916"} Feb 19 10:26:06 crc kubenswrapper[4780]: I0219 10:26:06.336505 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:26:06 crc kubenswrapper[4780]: I0219 10:26:06.336599 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.158411 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.234132 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dw54\" (UniqueName: \"kubernetes.io/projected/6924c64f-eb99-482b-ac0d-97737deb9e6c-kube-api-access-7dw54\") pod \"6924c64f-eb99-482b-ac0d-97737deb9e6c\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.234569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ceph\") pod \"6924c64f-eb99-482b-ac0d-97737deb9e6c\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.234794 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ssh-key-openstack-cell1\") pod \"6924c64f-eb99-482b-ac0d-97737deb9e6c\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.235004 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-inventory\") pod \"6924c64f-eb99-482b-ac0d-97737deb9e6c\" (UID: \"6924c64f-eb99-482b-ac0d-97737deb9e6c\") " Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.242069 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ceph" (OuterVolumeSpecName: "ceph") pod "6924c64f-eb99-482b-ac0d-97737deb9e6c" (UID: "6924c64f-eb99-482b-ac0d-97737deb9e6c"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.246275 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6924c64f-eb99-482b-ac0d-97737deb9e6c-kube-api-access-7dw54" (OuterVolumeSpecName: "kube-api-access-7dw54") pod "6924c64f-eb99-482b-ac0d-97737deb9e6c" (UID: "6924c64f-eb99-482b-ac0d-97737deb9e6c"). InnerVolumeSpecName "kube-api-access-7dw54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.272695 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-inventory" (OuterVolumeSpecName: "inventory") pod "6924c64f-eb99-482b-ac0d-97737deb9e6c" (UID: "6924c64f-eb99-482b-ac0d-97737deb9e6c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.288642 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6924c64f-eb99-482b-ac0d-97737deb9e6c" (UID: "6924c64f-eb99-482b-ac0d-97737deb9e6c"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.339472 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dw54\" (UniqueName: \"kubernetes.io/projected/6924c64f-eb99-482b-ac0d-97737deb9e6c-kube-api-access-7dw54\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.339520 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.339530 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.339540 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6924c64f-eb99-482b-ac0d-97737deb9e6c-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.689218 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" event={"ID":"6924c64f-eb99-482b-ac0d-97737deb9e6c","Type":"ContainerDied","Data":"1a18e69542a331b6eaa625fb83fa1a27c434c481c8e1d083c19bf4b58dccb4ce"} Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.689291 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a18e69542a331b6eaa625fb83fa1a27c434c481c8e1d083c19bf4b58dccb4ce" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.689405 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-pxmc7" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.751179 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-dlpvv"] Feb 19 10:26:07 crc kubenswrapper[4780]: E0219 10:26:07.751805 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6924c64f-eb99-482b-ac0d-97737deb9e6c" containerName="reboot-os-openstack-openstack-cell1" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.751825 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6924c64f-eb99-482b-ac0d-97737deb9e6c" containerName="reboot-os-openstack-openstack-cell1" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.752074 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6924c64f-eb99-482b-ac0d-97737deb9e6c" containerName="reboot-os-openstack-openstack-cell1" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.753141 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.756763 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.757069 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.757111 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.761616 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.781102 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-dlpvv"] Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-inventory\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783449 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783526 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783577 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ceph\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783595 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783621 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783690 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783718 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzbn\" (UniqueName: \"kubernetes.io/projected/76cb5bb5-2704-4830-a56c-da79652d9656-kube-api-access-5zzbn\") pod 
\"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.783758 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887484 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-inventory\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887738 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887857 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ceph\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887890 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887933 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.887984 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.888018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-sriov-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.888061 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.888115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.888190 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzbn\" (UniqueName: \"kubernetes.io/projected/76cb5bb5-2704-4830-a56c-da79652d9656-kube-api-access-5zzbn\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.888227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.894019 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.894019 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.894578 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.895199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.895390 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ceph\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: 
\"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.895714 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.895797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-inventory\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.896944 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.897304 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.905020 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.910096 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:07 crc kubenswrapper[4780]: I0219 10:26:07.915542 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzbn\" (UniqueName: \"kubernetes.io/projected/76cb5bb5-2704-4830-a56c-da79652d9656-kube-api-access-5zzbn\") pod \"install-certs-openstack-openstack-cell1-dlpvv\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:08 crc kubenswrapper[4780]: I0219 10:26:08.124901 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:08 crc kubenswrapper[4780]: I0219 10:26:08.979042 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-dlpvv"] Feb 19 10:26:09 crc kubenswrapper[4780]: I0219 10:26:09.722097 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" event={"ID":"76cb5bb5-2704-4830-a56c-da79652d9656","Type":"ContainerStarted","Data":"a9762a95455ccce71abec9f6bf1fdf0d07bb634da8988ff44278d5ccaa6b58df"} Feb 19 10:26:10 crc kubenswrapper[4780]: I0219 10:26:10.735490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" event={"ID":"76cb5bb5-2704-4830-a56c-da79652d9656","Type":"ContainerStarted","Data":"eb2f9b0ff7dbeda4a65b37c30bfb64b9456063a27738f0afbbbbc9202566e22b"} Feb 19 10:26:10 crc kubenswrapper[4780]: I0219 10:26:10.758250 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" podStartSLOduration=2.9918400099999998 podStartE2EDuration="3.758219899s" podCreationTimestamp="2026-02-19 10:26:07 +0000 UTC" firstStartedPulling="2026-02-19 10:26:08.967603036 +0000 UTC m=+7511.711260485" lastFinishedPulling="2026-02-19 10:26:09.733982925 +0000 UTC m=+7512.477640374" observedRunningTime="2026-02-19 10:26:10.755208619 +0000 UTC m=+7513.498866068" watchObservedRunningTime="2026-02-19 10:26:10.758219899 +0000 UTC m=+7513.501877348" Feb 19 10:26:29 crc kubenswrapper[4780]: I0219 10:26:29.988129 4780 generic.go:334] "Generic (PLEG): container finished" podID="76cb5bb5-2704-4830-a56c-da79652d9656" containerID="eb2f9b0ff7dbeda4a65b37c30bfb64b9456063a27738f0afbbbbc9202566e22b" exitCode=0 Feb 19 10:26:29 crc kubenswrapper[4780]: I0219 10:26:29.989290 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" event={"ID":"76cb5bb5-2704-4830-a56c-da79652d9656","Type":"ContainerDied","Data":"eb2f9b0ff7dbeda4a65b37c30bfb64b9456063a27738f0afbbbbc9202566e22b"} Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.480518 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.549913 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-metadata-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550367 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-sriov-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550402 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-inventory\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550451 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zzbn\" (UniqueName: \"kubernetes.io/projected/76cb5bb5-2704-4830-a56c-da79652d9656-kube-api-access-5zzbn\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550528 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-libvirt-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550629 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-telemetry-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550683 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ceph\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ovn-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550753 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-bootstrap-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.550939 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ssh-key-openstack-cell1\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.551036 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-nova-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.551284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-dhcp-combined-ca-bundle\") pod \"76cb5bb5-2704-4830-a56c-da79652d9656\" (UID: \"76cb5bb5-2704-4830-a56c-da79652d9656\") " Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.560959 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.561263 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.561332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.563362 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.563600 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.564041 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.564346 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ceph" (OuterVolumeSpecName: "ceph") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.568076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.569945 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.571676 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76cb5bb5-2704-4830-a56c-da79652d9656-kube-api-access-5zzbn" (OuterVolumeSpecName: "kube-api-access-5zzbn") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "kube-api-access-5zzbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.591813 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.595469 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-inventory" (OuterVolumeSpecName: "inventory") pod "76cb5bb5-2704-4830-a56c-da79652d9656" (UID: "76cb5bb5-2704-4830-a56c-da79652d9656"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654508 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654549 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654564 4780 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654573 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654583 4780 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654592 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654602 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654614 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654623 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654635 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zzbn\" (UniqueName: \"kubernetes.io/projected/76cb5bb5-2704-4830-a56c-da79652d9656-kube-api-access-5zzbn\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654645 4780 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4780]: I0219 10:26:31.654654 4780 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cb5bb5-2704-4830-a56c-da79652d9656-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.015536 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" event={"ID":"76cb5bb5-2704-4830-a56c-da79652d9656","Type":"ContainerDied","Data":"a9762a95455ccce71abec9f6bf1fdf0d07bb634da8988ff44278d5ccaa6b58df"} Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.015627 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9762a95455ccce71abec9f6bf1fdf0d07bb634da8988ff44278d5ccaa6b58df" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.015788 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-dlpvv" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.181461 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-nczgk"] Feb 19 10:26:32 crc kubenswrapper[4780]: E0219 10:26:32.182255 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76cb5bb5-2704-4830-a56c-da79652d9656" containerName="install-certs-openstack-openstack-cell1" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.182278 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="76cb5bb5-2704-4830-a56c-da79652d9656" containerName="install-certs-openstack-openstack-cell1" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.182567 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="76cb5bb5-2704-4830-a56c-da79652d9656" containerName="install-certs-openstack-openstack-cell1" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.184829 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.188159 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.188496 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.193957 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.196203 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.196980 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-nczgk"] Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.274643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ceph\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.275008 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-inventory\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.275089 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh998\" 
(UniqueName: \"kubernetes.io/projected/aa96424d-d698-4a79-a271-8150de092abf-kube-api-access-sh998\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.275472 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.378135 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-inventory\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.378208 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh998\" (UniqueName: \"kubernetes.io/projected/aa96424d-d698-4a79-a271-8150de092abf-kube-api-access-sh998\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.378280 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 
10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.378348 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ceph\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.388349 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-inventory\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.388539 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ceph\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.389750 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.402774 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh998\" (UniqueName: \"kubernetes.io/projected/aa96424d-d698-4a79-a271-8150de092abf-kube-api-access-sh998\") pod \"ceph-client-openstack-openstack-cell1-nczgk\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " 
pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:32 crc kubenswrapper[4780]: I0219 10:26:32.522316 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:33 crc kubenswrapper[4780]: I0219 10:26:33.116251 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-nczgk"] Feb 19 10:26:34 crc kubenswrapper[4780]: I0219 10:26:34.048953 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" event={"ID":"aa96424d-d698-4a79-a271-8150de092abf","Type":"ContainerStarted","Data":"890d347ddc43a5fd6f783d874003b6a9a73b4e764d71e4995453b892e725dc42"} Feb 19 10:26:34 crc kubenswrapper[4780]: I0219 10:26:34.049662 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" event={"ID":"aa96424d-d698-4a79-a271-8150de092abf","Type":"ContainerStarted","Data":"2b953059d6f6b22eafce84e0f8f0416f1e6ca23d12d8bba09635097d3ef7d064"} Feb 19 10:26:34 crc kubenswrapper[4780]: I0219 10:26:34.070257 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" podStartSLOduration=1.496316975 podStartE2EDuration="2.07023162s" podCreationTimestamp="2026-02-19 10:26:32 +0000 UTC" firstStartedPulling="2026-02-19 10:26:33.129965249 +0000 UTC m=+7535.873622698" lastFinishedPulling="2026-02-19 10:26:33.703879894 +0000 UTC m=+7536.447537343" observedRunningTime="2026-02-19 10:26:34.065949734 +0000 UTC m=+7536.809607183" watchObservedRunningTime="2026-02-19 10:26:34.07023162 +0000 UTC m=+7536.813889069" Feb 19 10:26:36 crc kubenswrapper[4780]: I0219 10:26:36.336536 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:26:36 crc kubenswrapper[4780]: I0219 10:26:36.337193 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:26:40 crc kubenswrapper[4780]: I0219 10:26:40.121192 4780 generic.go:334] "Generic (PLEG): container finished" podID="aa96424d-d698-4a79-a271-8150de092abf" containerID="890d347ddc43a5fd6f783d874003b6a9a73b4e764d71e4995453b892e725dc42" exitCode=0 Feb 19 10:26:40 crc kubenswrapper[4780]: I0219 10:26:40.121327 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" event={"ID":"aa96424d-d698-4a79-a271-8150de092abf","Type":"ContainerDied","Data":"890d347ddc43a5fd6f783d874003b6a9a73b4e764d71e4995453b892e725dc42"} Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.731449 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.749469 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh998\" (UniqueName: \"kubernetes.io/projected/aa96424d-d698-4a79-a271-8150de092abf-kube-api-access-sh998\") pod \"aa96424d-d698-4a79-a271-8150de092abf\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.749834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-inventory\") pod \"aa96424d-d698-4a79-a271-8150de092abf\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.750047 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ceph\") pod \"aa96424d-d698-4a79-a271-8150de092abf\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.750207 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ssh-key-openstack-cell1\") pod \"aa96424d-d698-4a79-a271-8150de092abf\" (UID: \"aa96424d-d698-4a79-a271-8150de092abf\") " Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.758554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ceph" (OuterVolumeSpecName: "ceph") pod "aa96424d-d698-4a79-a271-8150de092abf" (UID: "aa96424d-d698-4a79-a271-8150de092abf"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.764333 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa96424d-d698-4a79-a271-8150de092abf-kube-api-access-sh998" (OuterVolumeSpecName: "kube-api-access-sh998") pod "aa96424d-d698-4a79-a271-8150de092abf" (UID: "aa96424d-d698-4a79-a271-8150de092abf"). InnerVolumeSpecName "kube-api-access-sh998". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.785636 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-inventory" (OuterVolumeSpecName: "inventory") pod "aa96424d-d698-4a79-a271-8150de092abf" (UID: "aa96424d-d698-4a79-a271-8150de092abf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.786421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "aa96424d-d698-4a79-a271-8150de092abf" (UID: "aa96424d-d698-4a79-a271-8150de092abf"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.853668 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.853726 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.853737 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa96424d-d698-4a79-a271-8150de092abf-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:41 crc kubenswrapper[4780]: I0219 10:26:41.853749 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh998\" (UniqueName: \"kubernetes.io/projected/aa96424d-d698-4a79-a271-8150de092abf-kube-api-access-sh998\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.150050 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" event={"ID":"aa96424d-d698-4a79-a271-8150de092abf","Type":"ContainerDied","Data":"2b953059d6f6b22eafce84e0f8f0416f1e6ca23d12d8bba09635097d3ef7d064"} Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.150153 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b953059d6f6b22eafce84e0f8f0416f1e6ca23d12d8bba09635097d3ef7d064" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.150171 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-nczgk" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.266244 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-5czgg"] Feb 19 10:26:42 crc kubenswrapper[4780]: E0219 10:26:42.266997 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa96424d-d698-4a79-a271-8150de092abf" containerName="ceph-client-openstack-openstack-cell1" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.267024 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa96424d-d698-4a79-a271-8150de092abf" containerName="ceph-client-openstack-openstack-cell1" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.267336 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa96424d-d698-4a79-a271-8150de092abf" containerName="ceph-client-openstack-openstack-cell1" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.268360 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.273196 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.273940 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.274111 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.275470 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.275671 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.291847 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-5czgg"] Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.372759 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.372956 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 
crc kubenswrapper[4780]: I0219 10:26:42.373008 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-inventory\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.373211 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbrks\" (UniqueName: \"kubernetes.io/projected/4813ef2e-bae5-43f2-b914-a755f4cac0ad-kube-api-access-fbrks\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.373256 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.373285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ceph\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.476102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-inventory\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: 
\"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.476566 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbrks\" (UniqueName: \"kubernetes.io/projected/4813ef2e-bae5-43f2-b914-a755f4cac0ad-kube-api-access-fbrks\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.476608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.476635 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ceph\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.476698 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.476785 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ssh-key-openstack-cell1\") 
pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.477771 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.481413 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-inventory\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.481680 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.483249 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ceph\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.488834 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovn-combined-ca-bundle\") pod 
\"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.495410 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbrks\" (UniqueName: \"kubernetes.io/projected/4813ef2e-bae5-43f2-b914-a755f4cac0ad-kube-api-access-fbrks\") pod \"ovn-openstack-openstack-cell1-5czgg\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:42 crc kubenswrapper[4780]: I0219 10:26:42.587431 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:26:43 crc kubenswrapper[4780]: I0219 10:26:43.238472 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-5czgg"] Feb 19 10:26:44 crc kubenswrapper[4780]: I0219 10:26:44.178357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-5czgg" event={"ID":"4813ef2e-bae5-43f2-b914-a755f4cac0ad","Type":"ContainerStarted","Data":"c27f7534dcd3d03d08b53a5f4c69d488289a20366a8e72568fdc4e6006669754"} Feb 19 10:26:44 crc kubenswrapper[4780]: I0219 10:26:44.178725 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-5czgg" event={"ID":"4813ef2e-bae5-43f2-b914-a755f4cac0ad","Type":"ContainerStarted","Data":"7db68bdb3ea4fc5b27f5cdfe1ef5d902ff3e7bf50e6d7f0ed02a6e5c0890e4be"} Feb 19 10:26:44 crc kubenswrapper[4780]: I0219 10:26:44.209076 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-5czgg" podStartSLOduration=1.759631922 podStartE2EDuration="2.209042822s" podCreationTimestamp="2026-02-19 10:26:42 +0000 UTC" firstStartedPulling="2026-02-19 10:26:43.194465825 +0000 UTC m=+7545.938123274" lastFinishedPulling="2026-02-19 10:26:43.643876715 
+0000 UTC m=+7546.387534174" observedRunningTime="2026-02-19 10:26:44.201407623 +0000 UTC m=+7546.945065072" watchObservedRunningTime="2026-02-19 10:26:44.209042822 +0000 UTC m=+7546.952700271" Feb 19 10:27:06 crc kubenswrapper[4780]: I0219 10:27:06.336699 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:27:06 crc kubenswrapper[4780]: I0219 10:27:06.337504 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:27:06 crc kubenswrapper[4780]: I0219 10:27:06.337563 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:27:06 crc kubenswrapper[4780]: I0219 10:27:06.339040 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd366f88b2f1c53058fb51f81ce5e379d833f904e3e651a4b1c101a251bd85ce"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:27:06 crc kubenswrapper[4780]: I0219 10:27:06.339139 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://fd366f88b2f1c53058fb51f81ce5e379d833f904e3e651a4b1c101a251bd85ce" gracePeriod=600 Feb 19 10:27:07 crc 
kubenswrapper[4780]: I0219 10:27:07.447458 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="fd366f88b2f1c53058fb51f81ce5e379d833f904e3e651a4b1c101a251bd85ce" exitCode=0 Feb 19 10:27:07 crc kubenswrapper[4780]: I0219 10:27:07.448203 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"fd366f88b2f1c53058fb51f81ce5e379d833f904e3e651a4b1c101a251bd85ce"} Feb 19 10:27:07 crc kubenswrapper[4780]: I0219 10:27:07.448239 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89"} Feb 19 10:27:07 crc kubenswrapper[4780]: I0219 10:27:07.448258 4780 scope.go:117] "RemoveContainer" containerID="de4ec122c4ed40b93ea4932974c2bd0e10dbbe4454953e6f73540d160a8fc64c" Feb 19 10:27:51 crc kubenswrapper[4780]: I0219 10:27:51.965808 4780 generic.go:334] "Generic (PLEG): container finished" podID="4813ef2e-bae5-43f2-b914-a755f4cac0ad" containerID="c27f7534dcd3d03d08b53a5f4c69d488289a20366a8e72568fdc4e6006669754" exitCode=0 Feb 19 10:27:51 crc kubenswrapper[4780]: I0219 10:27:51.966535 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-5czgg" event={"ID":"4813ef2e-bae5-43f2-b914-a755f4cac0ad","Type":"ContainerDied","Data":"c27f7534dcd3d03d08b53a5f4c69d488289a20366a8e72568fdc4e6006669754"} Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.732405 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.875370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovn-combined-ca-bundle\") pod \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.875792 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbrks\" (UniqueName: \"kubernetes.io/projected/4813ef2e-bae5-43f2-b914-a755f4cac0ad-kube-api-access-fbrks\") pod \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.875843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ceph\") pod \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.875886 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ssh-key-openstack-cell1\") pod \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.875937 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-inventory\") pod \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.876001 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovncontroller-config-0\") pod \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\" (UID: \"4813ef2e-bae5-43f2-b914-a755f4cac0ad\") " Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.884016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4813ef2e-bae5-43f2-b914-a755f4cac0ad-kube-api-access-fbrks" (OuterVolumeSpecName: "kube-api-access-fbrks") pod "4813ef2e-bae5-43f2-b914-a755f4cac0ad" (UID: "4813ef2e-bae5-43f2-b914-a755f4cac0ad"). InnerVolumeSpecName "kube-api-access-fbrks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.886330 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ceph" (OuterVolumeSpecName: "ceph") pod "4813ef2e-bae5-43f2-b914-a755f4cac0ad" (UID: "4813ef2e-bae5-43f2-b914-a755f4cac0ad"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.888497 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4813ef2e-bae5-43f2-b914-a755f4cac0ad" (UID: "4813ef2e-bae5-43f2-b914-a755f4cac0ad"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.912101 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4813ef2e-bae5-43f2-b914-a755f4cac0ad" (UID: "4813ef2e-bae5-43f2-b914-a755f4cac0ad"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.913844 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-inventory" (OuterVolumeSpecName: "inventory") pod "4813ef2e-bae5-43f2-b914-a755f4cac0ad" (UID: "4813ef2e-bae5-43f2-b914-a755f4cac0ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.916023 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4813ef2e-bae5-43f2-b914-a755f4cac0ad" (UID: "4813ef2e-bae5-43f2-b914-a755f4cac0ad"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.979411 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbrks\" (UniqueName: \"kubernetes.io/projected/4813ef2e-bae5-43f2-b914-a755f4cac0ad-kube-api-access-fbrks\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.979463 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.979475 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.979485 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-inventory\") on node \"crc\" DevicePath 
\"\"" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.979496 4780 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:53 crc kubenswrapper[4780]: I0219 10:27:53.979506 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4813ef2e-bae5-43f2-b914-a755f4cac0ad-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.052763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-5czgg" event={"ID":"4813ef2e-bae5-43f2-b914-a755f4cac0ad","Type":"ContainerDied","Data":"7db68bdb3ea4fc5b27f5cdfe1ef5d902ff3e7bf50e6d7f0ed02a6e5c0890e4be"} Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.052838 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db68bdb3ea4fc5b27f5cdfe1ef5d902ff3e7bf50e6d7f0ed02a6e5c0890e4be" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.053000 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-5czgg" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.130445 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-wjpcn"] Feb 19 10:27:54 crc kubenswrapper[4780]: E0219 10:27:54.131052 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4813ef2e-bae5-43f2-b914-a755f4cac0ad" containerName="ovn-openstack-openstack-cell1" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.131072 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4813ef2e-bae5-43f2-b914-a755f4cac0ad" containerName="ovn-openstack-openstack-cell1" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.131324 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4813ef2e-bae5-43f2-b914-a755f4cac0ad" containerName="ovn-openstack-openstack-cell1" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.132306 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.136629 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.136810 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.136945 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.137146 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.137180 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.144196 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.157989 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-wjpcn"] Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.184840 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.184932 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj4mz\" 
(UniqueName: \"kubernetes.io/projected/24198e28-4159-44b0-ac69-e42faa76272a-kube-api-access-qj4mz\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.185026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.185070 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.185162 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.185246 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-inventory\") pod 
\"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.185332 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.287757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj4mz\" (UniqueName: \"kubernetes.io/projected/24198e28-4159-44b0-ac69-e42faa76272a-kube-api-access-qj4mz\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.287886 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.287925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 
10:27:54.287996 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.288068 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.288162 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.288250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.294333 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-inventory\") pod 
\"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.294584 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.298470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.299103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.302653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 
10:27:54.303871 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.308346 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj4mz\" (UniqueName: \"kubernetes.io/projected/24198e28-4159-44b0-ac69-e42faa76272a-kube-api-access-qj4mz\") pod \"neutron-metadata-openstack-openstack-cell1-wjpcn\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:54 crc kubenswrapper[4780]: I0219 10:27:54.453274 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:27:55 crc kubenswrapper[4780]: W0219 10:27:55.121491 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24198e28_4159_44b0_ac69_e42faa76272a.slice/crio-c5ca22a277273110ca3fbed4a076089727df1b7a43f5c1f58bb7b325830f7132 WatchSource:0}: Error finding container c5ca22a277273110ca3fbed4a076089727df1b7a43f5c1f58bb7b325830f7132: Status 404 returned error can't find the container with id c5ca22a277273110ca3fbed4a076089727df1b7a43f5c1f58bb7b325830f7132 Feb 19 10:27:55 crc kubenswrapper[4780]: I0219 10:27:55.129645 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-wjpcn"] Feb 19 10:27:56 crc kubenswrapper[4780]: I0219 10:27:56.098169 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" 
event={"ID":"24198e28-4159-44b0-ac69-e42faa76272a","Type":"ContainerStarted","Data":"c5ca22a277273110ca3fbed4a076089727df1b7a43f5c1f58bb7b325830f7132"} Feb 19 10:27:57 crc kubenswrapper[4780]: I0219 10:27:57.110908 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" event={"ID":"24198e28-4159-44b0-ac69-e42faa76272a","Type":"ContainerStarted","Data":"609a23dcc5eea980149f5734afbf335065ee39cf3a47210f032122957ec5c1b4"} Feb 19 10:27:57 crc kubenswrapper[4780]: I0219 10:27:57.145520 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" podStartSLOduration=2.427279892 podStartE2EDuration="3.145494584s" podCreationTimestamp="2026-02-19 10:27:54 +0000 UTC" firstStartedPulling="2026-02-19 10:27:55.124969154 +0000 UTC m=+7617.868626603" lastFinishedPulling="2026-02-19 10:27:55.843183846 +0000 UTC m=+7618.586841295" observedRunningTime="2026-02-19 10:27:57.138719836 +0000 UTC m=+7619.882377285" watchObservedRunningTime="2026-02-19 10:27:57.145494584 +0000 UTC m=+7619.889152033" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.335787 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wphtb"] Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.347354 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.349679 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wphtb"] Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.445205 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-utilities\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.445836 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2w2\" (UniqueName: \"kubernetes.io/projected/fa85af0f-d793-43b6-b32b-73168cbb4ed4-kube-api-access-sz2w2\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.446014 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-catalog-content\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.549152 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2w2\" (UniqueName: \"kubernetes.io/projected/fa85af0f-d793-43b6-b32b-73168cbb4ed4-kube-api-access-sz2w2\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.549254 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-catalog-content\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.549333 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-utilities\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.550112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-catalog-content\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.550169 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-utilities\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.573996 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2w2\" (UniqueName: \"kubernetes.io/projected/fa85af0f-d793-43b6-b32b-73168cbb4ed4-kube-api-access-sz2w2\") pod \"community-operators-wphtb\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:28 crc kubenswrapper[4780]: I0219 10:28:28.675469 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:29 crc kubenswrapper[4780]: I0219 10:28:29.277579 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wphtb"] Feb 19 10:28:29 crc kubenswrapper[4780]: I0219 10:28:29.480536 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerStarted","Data":"706df407536b991a14d5a1eeeb5c576089da9cf8b7e547308df28b62e553ea10"} Feb 19 10:28:30 crc kubenswrapper[4780]: I0219 10:28:30.497539 4780 generic.go:334] "Generic (PLEG): container finished" podID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerID="60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1" exitCode=0 Feb 19 10:28:30 crc kubenswrapper[4780]: I0219 10:28:30.499250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerDied","Data":"60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1"} Feb 19 10:28:31 crc kubenswrapper[4780]: I0219 10:28:31.513361 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerStarted","Data":"ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418"} Feb 19 10:28:33 crc kubenswrapper[4780]: I0219 10:28:33.537383 4780 generic.go:334] "Generic (PLEG): container finished" podID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerID="ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418" exitCode=0 Feb 19 10:28:33 crc kubenswrapper[4780]: I0219 10:28:33.537457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" 
event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerDied","Data":"ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418"} Feb 19 10:28:34 crc kubenswrapper[4780]: I0219 10:28:34.551447 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerStarted","Data":"5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1"} Feb 19 10:28:34 crc kubenswrapper[4780]: I0219 10:28:34.574852 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wphtb" podStartSLOduration=2.983628488 podStartE2EDuration="6.574828538s" podCreationTimestamp="2026-02-19 10:28:28 +0000 UTC" firstStartedPulling="2026-02-19 10:28:30.504980591 +0000 UTC m=+7653.248638040" lastFinishedPulling="2026-02-19 10:28:34.096180641 +0000 UTC m=+7656.839838090" observedRunningTime="2026-02-19 10:28:34.571783592 +0000 UTC m=+7657.315441051" watchObservedRunningTime="2026-02-19 10:28:34.574828538 +0000 UTC m=+7657.318485987" Feb 19 10:28:38 crc kubenswrapper[4780]: I0219 10:28:38.675797 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:38 crc kubenswrapper[4780]: I0219 10:28:38.676634 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:38 crc kubenswrapper[4780]: I0219 10:28:38.726767 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:39 crc kubenswrapper[4780]: I0219 10:28:39.676912 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:39 crc kubenswrapper[4780]: I0219 10:28:39.765477 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-wphtb"] Feb 19 10:28:41 crc kubenswrapper[4780]: I0219 10:28:41.642235 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wphtb" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="registry-server" containerID="cri-o://5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1" gracePeriod=2 Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.254053 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.263485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-catalog-content\") pod \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.263752 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2w2\" (UniqueName: \"kubernetes.io/projected/fa85af0f-d793-43b6-b32b-73168cbb4ed4-kube-api-access-sz2w2\") pod \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.264000 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-utilities\") pod \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\" (UID: \"fa85af0f-d793-43b6-b32b-73168cbb4ed4\") " Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.264911 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-utilities" (OuterVolumeSpecName: "utilities") pod "fa85af0f-d793-43b6-b32b-73168cbb4ed4" (UID: 
"fa85af0f-d793-43b6-b32b-73168cbb4ed4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.274871 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa85af0f-d793-43b6-b32b-73168cbb4ed4-kube-api-access-sz2w2" (OuterVolumeSpecName: "kube-api-access-sz2w2") pod "fa85af0f-d793-43b6-b32b-73168cbb4ed4" (UID: "fa85af0f-d793-43b6-b32b-73168cbb4ed4"). InnerVolumeSpecName "kube-api-access-sz2w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.328494 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa85af0f-d793-43b6-b32b-73168cbb4ed4" (UID: "fa85af0f-d793-43b6-b32b-73168cbb4ed4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.367411 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.367465 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85af0f-d793-43b6-b32b-73168cbb4ed4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.367482 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2w2\" (UniqueName: \"kubernetes.io/projected/fa85af0f-d793-43b6-b32b-73168cbb4ed4-kube-api-access-sz2w2\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.657638 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerID="5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1" exitCode=0 Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.657713 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerDied","Data":"5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1"} Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.657747 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wphtb" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.657773 4780 scope.go:117] "RemoveContainer" containerID="5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.657757 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wphtb" event={"ID":"fa85af0f-d793-43b6-b32b-73168cbb4ed4","Type":"ContainerDied","Data":"706df407536b991a14d5a1eeeb5c576089da9cf8b7e547308df28b62e553ea10"} Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.695028 4780 scope.go:117] "RemoveContainer" containerID="ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.719327 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wphtb"] Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.731112 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wphtb"] Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.740799 4780 scope.go:117] "RemoveContainer" containerID="60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.793525 4780 scope.go:117] "RemoveContainer" 
containerID="5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1" Feb 19 10:28:42 crc kubenswrapper[4780]: E0219 10:28:42.794393 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1\": container with ID starting with 5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1 not found: ID does not exist" containerID="5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.794448 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1"} err="failed to get container status \"5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1\": rpc error: code = NotFound desc = could not find container \"5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1\": container with ID starting with 5ad87a4abc3cfb6e6720ca06b387bbf82e511c862b1ad8952b3d7a07f2e35ed1 not found: ID does not exist" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.794481 4780 scope.go:117] "RemoveContainer" containerID="ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418" Feb 19 10:28:42 crc kubenswrapper[4780]: E0219 10:28:42.794969 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418\": container with ID starting with ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418 not found: ID does not exist" containerID="ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.795102 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418"} err="failed to get container status \"ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418\": rpc error: code = NotFound desc = could not find container \"ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418\": container with ID starting with ceb50bc777bf3ab6361be382e9f710f8f07fedc7a41bc49c644e87a575a3e418 not found: ID does not exist" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.795216 4780 scope.go:117] "RemoveContainer" containerID="60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1" Feb 19 10:28:42 crc kubenswrapper[4780]: E0219 10:28:42.795890 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1\": container with ID starting with 60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1 not found: ID does not exist" containerID="60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1" Feb 19 10:28:42 crc kubenswrapper[4780]: I0219 10:28:42.796004 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1"} err="failed to get container status \"60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1\": rpc error: code = NotFound desc = could not find container \"60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1\": container with ID starting with 60950a7bb43a123554c013b164e35ebfba4646a6f6d03415faa173eb469ae5e1 not found: ID does not exist" Feb 19 10:28:43 crc kubenswrapper[4780]: I0219 10:28:43.957489 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" path="/var/lib/kubelet/pods/fa85af0f-d793-43b6-b32b-73168cbb4ed4/volumes" Feb 19 10:28:47 crc kubenswrapper[4780]: E0219 
10:28:47.400916 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24198e28_4159_44b0_ac69_e42faa76272a.slice/crio-609a23dcc5eea980149f5734afbf335065ee39cf3a47210f032122957ec5c1b4.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:28:47 crc kubenswrapper[4780]: I0219 10:28:47.724277 4780 generic.go:334] "Generic (PLEG): container finished" podID="24198e28-4159-44b0-ac69-e42faa76272a" containerID="609a23dcc5eea980149f5734afbf335065ee39cf3a47210f032122957ec5c1b4" exitCode=0 Feb 19 10:28:47 crc kubenswrapper[4780]: I0219 10:28:47.724364 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" event={"ID":"24198e28-4159-44b0-ac69-e42faa76272a","Type":"ContainerDied","Data":"609a23dcc5eea980149f5734afbf335065ee39cf3a47210f032122957ec5c1b4"} Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.289582 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.385630 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-metadata-combined-ca-bundle\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.386239 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj4mz\" (UniqueName: \"kubernetes.io/projected/24198e28-4159-44b0-ac69-e42faa76272a-kube-api-access-qj4mz\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.386418 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ssh-key-openstack-cell1\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.386505 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-nova-metadata-neutron-config-0\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.386639 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-inventory\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 
10:28:49.386677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ceph\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.386709 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"24198e28-4159-44b0-ac69-e42faa76272a\" (UID: \"24198e28-4159-44b0-ac69-e42faa76272a\") " Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.402801 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ceph" (OuterVolumeSpecName: "ceph") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.402990 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.404456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24198e28-4159-44b0-ac69-e42faa76272a-kube-api-access-qj4mz" (OuterVolumeSpecName: "kube-api-access-qj4mz") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). 
InnerVolumeSpecName "kube-api-access-qj4mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.422633 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.426653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.428107 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.431189 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-inventory" (OuterVolumeSpecName: "inventory") pod "24198e28-4159-44b0-ac69-e42faa76272a" (UID: "24198e28-4159-44b0-ac69-e42faa76272a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492560 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492617 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj4mz\" (UniqueName: \"kubernetes.io/projected/24198e28-4159-44b0-ac69-e42faa76272a-kube-api-access-qj4mz\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492627 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492640 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492652 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492665 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.492678 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/24198e28-4159-44b0-ac69-e42faa76272a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.750679 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" event={"ID":"24198e28-4159-44b0-ac69-e42faa76272a","Type":"ContainerDied","Data":"c5ca22a277273110ca3fbed4a076089727df1b7a43f5c1f58bb7b325830f7132"} Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.750738 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ca22a277273110ca3fbed4a076089727df1b7a43f5c1f58bb7b325830f7132" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.750816 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wjpcn" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.866781 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-crkmk"] Feb 19 10:28:49 crc kubenswrapper[4780]: E0219 10:28:49.867622 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="extract-content" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.867651 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="extract-content" Feb 19 10:28:49 crc kubenswrapper[4780]: E0219 10:28:49.867670 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="registry-server" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.867678 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="registry-server" Feb 19 10:28:49 crc kubenswrapper[4780]: E0219 10:28:49.867702 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="extract-utilities" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.867713 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="extract-utilities" Feb 19 10:28:49 crc kubenswrapper[4780]: E0219 10:28:49.867753 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24198e28-4159-44b0-ac69-e42faa76272a" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.867766 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="24198e28-4159-44b0-ac69-e42faa76272a" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.868062 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa85af0f-d793-43b6-b32b-73168cbb4ed4" containerName="registry-server" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.868093 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="24198e28-4159-44b0-ac69-e42faa76272a" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.869279 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.873043 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.873507 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.873742 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.874856 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.876010 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:28:49 crc kubenswrapper[4780]: I0219 10:28:49.883298 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-crkmk"] Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.010908 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.011517 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbq6\" (UniqueName: \"kubernetes.io/projected/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-kube-api-access-2lbq6\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 
10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.011809 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ceph\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.012064 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-inventory\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.012256 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.012292 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.115527 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-secret-0\") pod 
\"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.115684 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lbq6\" (UniqueName: \"kubernetes.io/projected/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-kube-api-access-2lbq6\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.115741 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ceph\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.115773 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-inventory\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.115834 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.115857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.122589 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.122618 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.122757 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-inventory\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.123183 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ceph\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.124017 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.138633 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lbq6\" (UniqueName: \"kubernetes.io/projected/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-kube-api-access-2lbq6\") pod \"libvirt-openstack-openstack-cell1-crkmk\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.191640 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:28:50 crc kubenswrapper[4780]: I0219 10:28:50.792014 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-crkmk"] Feb 19 10:28:51 crc kubenswrapper[4780]: I0219 10:28:51.787033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" event={"ID":"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de","Type":"ContainerStarted","Data":"18eef1c0c0642e1d7d418e09e7f6f5c09f25baa27d4ac989c9a220f525d9ffb9"} Feb 19 10:28:51 crc kubenswrapper[4780]: I0219 10:28:51.788701 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" event={"ID":"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de","Type":"ContainerStarted","Data":"b7abfebb8501522dc576ca90d6b0baba0b07451843f07539adc32922f22d7c89"} Feb 19 10:28:51 crc kubenswrapper[4780]: I0219 10:28:51.825435 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" podStartSLOduration=2.215381472 podStartE2EDuration="2.825406975s" 
podCreationTimestamp="2026-02-19 10:28:49 +0000 UTC" firstStartedPulling="2026-02-19 10:28:50.794515671 +0000 UTC m=+7673.538173120" lastFinishedPulling="2026-02-19 10:28:51.404541164 +0000 UTC m=+7674.148198623" observedRunningTime="2026-02-19 10:28:51.811713754 +0000 UTC m=+7674.555371203" watchObservedRunningTime="2026-02-19 10:28:51.825406975 +0000 UTC m=+7674.569064434" Feb 19 10:29:06 crc kubenswrapper[4780]: I0219 10:29:06.338872 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:29:06 crc kubenswrapper[4780]: I0219 10:29:06.339613 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.415602 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lscsw"] Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.419277 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.431657 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lscsw"] Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.569026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddtz\" (UniqueName: \"kubernetes.io/projected/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-kube-api-access-2ddtz\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.569697 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-catalog-content\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.569894 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-utilities\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.671464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddtz\" (UniqueName: \"kubernetes.io/projected/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-kube-api-access-2ddtz\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.671577 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-catalog-content\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.671612 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-utilities\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.672164 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-utilities\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.672464 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-catalog-content\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.704961 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddtz\" (UniqueName: \"kubernetes.io/projected/0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b-kube-api-access-2ddtz\") pod \"certified-operators-lscsw\" (UID: \"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b\") " pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:15 crc kubenswrapper[4780]: I0219 10:29:15.759232 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:16 crc kubenswrapper[4780]: I0219 10:29:16.367847 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lscsw"] Feb 19 10:29:17 crc kubenswrapper[4780]: I0219 10:29:17.161521 4780 generic.go:334] "Generic (PLEG): container finished" podID="0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b" containerID="be81a84eb24bd5d6e15940a6a760f4ce8c65421dfb10c6f919488d1a9f5a9e41" exitCode=0 Feb 19 10:29:17 crc kubenswrapper[4780]: I0219 10:29:17.161643 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lscsw" event={"ID":"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b","Type":"ContainerDied","Data":"be81a84eb24bd5d6e15940a6a760f4ce8c65421dfb10c6f919488d1a9f5a9e41"} Feb 19 10:29:17 crc kubenswrapper[4780]: I0219 10:29:17.162038 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lscsw" event={"ID":"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b","Type":"ContainerStarted","Data":"12165cc3cad8c8cec2bc6a57c344d52a0e4b19b3ec5ddd082ee0b9ef1a48fbe2"} Feb 19 10:29:23 crc kubenswrapper[4780]: I0219 10:29:23.232353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lscsw" event={"ID":"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b","Type":"ContainerStarted","Data":"489354f16c94eda3fc14c025a582fa852aa90e9835742bc6b8b7c55beb6ba7db"} Feb 19 10:29:24 crc kubenswrapper[4780]: I0219 10:29:24.245752 4780 generic.go:334] "Generic (PLEG): container finished" podID="0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b" containerID="489354f16c94eda3fc14c025a582fa852aa90e9835742bc6b8b7c55beb6ba7db" exitCode=0 Feb 19 10:29:24 crc kubenswrapper[4780]: I0219 10:29:24.245815 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lscsw" 
event={"ID":"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b","Type":"ContainerDied","Data":"489354f16c94eda3fc14c025a582fa852aa90e9835742bc6b8b7c55beb6ba7db"} Feb 19 10:29:26 crc kubenswrapper[4780]: I0219 10:29:26.275972 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lscsw" event={"ID":"0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b","Type":"ContainerStarted","Data":"016dd2f71a542138d80fb53784772ceae50952ec9e3ab18878b3c0482691aebf"} Feb 19 10:29:26 crc kubenswrapper[4780]: I0219 10:29:26.312183 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lscsw" podStartSLOduration=3.385708066 podStartE2EDuration="11.312143574s" podCreationTimestamp="2026-02-19 10:29:15 +0000 UTC" firstStartedPulling="2026-02-19 10:29:17.164817522 +0000 UTC m=+7699.908474961" lastFinishedPulling="2026-02-19 10:29:25.09125302 +0000 UTC m=+7707.834910469" observedRunningTime="2026-02-19 10:29:26.302201788 +0000 UTC m=+7709.045859257" watchObservedRunningTime="2026-02-19 10:29:26.312143574 +0000 UTC m=+7709.055801033" Feb 19 10:29:35 crc kubenswrapper[4780]: I0219 10:29:35.759534 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:35 crc kubenswrapper[4780]: I0219 10:29:35.760522 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:35 crc kubenswrapper[4780]: I0219 10:29:35.837481 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:36 crc kubenswrapper[4780]: I0219 10:29:36.336767 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 19 10:29:36 crc kubenswrapper[4780]: I0219 10:29:36.336831 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:29:36 crc kubenswrapper[4780]: I0219 10:29:36.448961 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lscsw" Feb 19 10:29:36 crc kubenswrapper[4780]: I0219 10:29:36.625011 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lscsw"] Feb 19 10:29:36 crc kubenswrapper[4780]: I0219 10:29:36.668742 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4pmx"] Feb 19 10:29:36 crc kubenswrapper[4780]: I0219 10:29:36.669067 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4pmx" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="registry-server" containerID="cri-o://19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0" gracePeriod=2 Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.287965 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.344672 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-catalog-content\") pod \"ad7720a3-e835-4a18-adb2-4591c2db322b\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.344851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k65pj\" (UniqueName: \"kubernetes.io/projected/ad7720a3-e835-4a18-adb2-4591c2db322b-kube-api-access-k65pj\") pod \"ad7720a3-e835-4a18-adb2-4591c2db322b\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.344935 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-utilities\") pod \"ad7720a3-e835-4a18-adb2-4591c2db322b\" (UID: \"ad7720a3-e835-4a18-adb2-4591c2db322b\") " Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.345911 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-utilities" (OuterVolumeSpecName: "utilities") pod "ad7720a3-e835-4a18-adb2-4591c2db322b" (UID: "ad7720a3-e835-4a18-adb2-4591c2db322b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.349573 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.354499 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7720a3-e835-4a18-adb2-4591c2db322b-kube-api-access-k65pj" (OuterVolumeSpecName: "kube-api-access-k65pj") pod "ad7720a3-e835-4a18-adb2-4591c2db322b" (UID: "ad7720a3-e835-4a18-adb2-4591c2db322b"). InnerVolumeSpecName "kube-api-access-k65pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.410200 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerID="19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0" exitCode=0 Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.411418 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4pmx" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.411684 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4pmx" event={"ID":"ad7720a3-e835-4a18-adb2-4591c2db322b","Type":"ContainerDied","Data":"19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0"} Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.412588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4pmx" event={"ID":"ad7720a3-e835-4a18-adb2-4591c2db322b","Type":"ContainerDied","Data":"88918b73b72f067780d32774b7229928fe09241e236fce13cb9e625344af7e5f"} Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.412709 4780 scope.go:117] "RemoveContainer" containerID="19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.450032 4780 scope.go:117] "RemoveContainer" containerID="df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.452504 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k65pj\" (UniqueName: \"kubernetes.io/projected/ad7720a3-e835-4a18-adb2-4591c2db322b-kube-api-access-k65pj\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.461853 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7720a3-e835-4a18-adb2-4591c2db322b" (UID: "ad7720a3-e835-4a18-adb2-4591c2db322b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.520429 4780 scope.go:117] "RemoveContainer" containerID="a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.557565 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7720a3-e835-4a18-adb2-4591c2db322b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.563162 4780 scope.go:117] "RemoveContainer" containerID="19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0" Feb 19 10:29:37 crc kubenswrapper[4780]: E0219 10:29:37.565831 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0\": container with ID starting with 19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0 not found: ID does not exist" containerID="19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.565907 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0"} err="failed to get container status \"19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0\": rpc error: code = NotFound desc = could not find container \"19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0\": container with ID starting with 19f9eb60be232473dfc1106909aaa02dac9e4a3934f2d8cddda503eefbbb48f0 not found: ID does not exist" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.565949 4780 scope.go:117] "RemoveContainer" containerID="df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85" Feb 19 10:29:37 crc kubenswrapper[4780]: E0219 10:29:37.566658 4780 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85\": container with ID starting with df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85 not found: ID does not exist" containerID="df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.566712 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85"} err="failed to get container status \"df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85\": rpc error: code = NotFound desc = could not find container \"df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85\": container with ID starting with df0e9b67f22deab5210bbc98504b2b850f876e1366505d6946c953de71d00f85 not found: ID does not exist" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.566893 4780 scope.go:117] "RemoveContainer" containerID="a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1" Feb 19 10:29:37 crc kubenswrapper[4780]: E0219 10:29:37.568106 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1\": container with ID starting with a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1 not found: ID does not exist" containerID="a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.568193 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1"} err="failed to get container status \"a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1\": rpc error: code = NotFound desc = could 
not find container \"a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1\": container with ID starting with a320ad3776a9a5451896ff6724b8fb97649ba76077b7a48d4563579aab7a49a1 not found: ID does not exist" Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.752197 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4pmx"] Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.767334 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4pmx"] Feb 19 10:29:37 crc kubenswrapper[4780]: I0219 10:29:37.959508 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" path="/var/lib/kubelet/pods/ad7720a3-e835-4a18-adb2-4591c2db322b/volumes" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.177947 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz"] Feb 19 10:30:00 crc kubenswrapper[4780]: E0219 10:30:00.180794 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="registry-server" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.180913 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="registry-server" Feb 19 10:30:00 crc kubenswrapper[4780]: E0219 10:30:00.180987 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="extract-utilities" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.181043 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="extract-utilities" Feb 19 10:30:00 crc kubenswrapper[4780]: E0219 10:30:00.181163 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" 
containerName="extract-content" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.181255 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="extract-content" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.181607 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7720a3-e835-4a18-adb2-4591c2db322b" containerName="registry-server" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.182953 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.185448 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.185526 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.203621 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz"] Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.271961 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-config-volume\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.272040 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l66n\" (UniqueName: \"kubernetes.io/projected/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-kube-api-access-8l66n\") pod 
\"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.272100 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-secret-volume\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.374395 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-secret-volume\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.374675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-config-volume\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.374732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l66n\" (UniqueName: \"kubernetes.io/projected/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-kube-api-access-8l66n\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.375811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-config-volume\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.382856 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-secret-volume\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.395150 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l66n\" (UniqueName: \"kubernetes.io/projected/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-kube-api-access-8l66n\") pod \"collect-profiles-29524950-948lz\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:00 crc kubenswrapper[4780]: I0219 10:30:00.517553 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:01 crc kubenswrapper[4780]: I0219 10:30:01.042392 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz"] Feb 19 10:30:01 crc kubenswrapper[4780]: I0219 10:30:01.838231 4780 generic.go:334] "Generic (PLEG): container finished" podID="1a41d8bd-2e1f-4440-997d-bdad1f27cc52" containerID="a862ad6315c61901edaaa4173c7c931244a58258e871c33ac29d398c49ccb2a0" exitCode=0 Feb 19 10:30:01 crc kubenswrapper[4780]: I0219 10:30:01.838356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" event={"ID":"1a41d8bd-2e1f-4440-997d-bdad1f27cc52","Type":"ContainerDied","Data":"a862ad6315c61901edaaa4173c7c931244a58258e871c33ac29d398c49ccb2a0"} Feb 19 10:30:01 crc kubenswrapper[4780]: I0219 10:30:01.838690 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" event={"ID":"1a41d8bd-2e1f-4440-997d-bdad1f27cc52","Type":"ContainerStarted","Data":"80e91a0fa0b667a7c5b684aa90b7022df7b7103a7d8d941bff89edb8271fcbf6"} Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.298887 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.389527 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-config-volume\") pod \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.389851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-secret-volume\") pod \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.389892 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l66n\" (UniqueName: \"kubernetes.io/projected/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-kube-api-access-8l66n\") pod \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\" (UID: \"1a41d8bd-2e1f-4440-997d-bdad1f27cc52\") " Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.397156 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a41d8bd-2e1f-4440-997d-bdad1f27cc52" (UID: "1a41d8bd-2e1f-4440-997d-bdad1f27cc52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.397302 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a41d8bd-2e1f-4440-997d-bdad1f27cc52" (UID: "1a41d8bd-2e1f-4440-997d-bdad1f27cc52"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.412067 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-kube-api-access-8l66n" (OuterVolumeSpecName: "kube-api-access-8l66n") pod "1a41d8bd-2e1f-4440-997d-bdad1f27cc52" (UID: "1a41d8bd-2e1f-4440-997d-bdad1f27cc52"). InnerVolumeSpecName "kube-api-access-8l66n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.493363 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.493412 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l66n\" (UniqueName: \"kubernetes.io/projected/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-kube-api-access-8l66n\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.493424 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a41d8bd-2e1f-4440-997d-bdad1f27cc52-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.863207 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.863066 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-948lz" event={"ID":"1a41d8bd-2e1f-4440-997d-bdad1f27cc52","Type":"ContainerDied","Data":"80e91a0fa0b667a7c5b684aa90b7022df7b7103a7d8d941bff89edb8271fcbf6"} Feb 19 10:30:03 crc kubenswrapper[4780]: I0219 10:30:03.872248 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e91a0fa0b667a7c5b684aa90b7022df7b7103a7d8d941bff89edb8271fcbf6" Feb 19 10:30:04 crc kubenswrapper[4780]: I0219 10:30:04.389557 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"] Feb 19 10:30:04 crc kubenswrapper[4780]: I0219 10:30:04.402236 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-8t4kd"] Feb 19 10:30:05 crc kubenswrapper[4780]: I0219 10:30:05.952143 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac30c9c9-f698-4b4a-844c-34d5e9e13f80" path="/var/lib/kubelet/pods/ac30c9c9-f698-4b4a-844c-34d5e9e13f80/volumes" Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.336452 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.336529 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.336589 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.337864 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.337956 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" gracePeriod=600 Feb 19 10:30:06 crc kubenswrapper[4780]: E0219 10:30:06.474745 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.901342 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" exitCode=0 Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.901386 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89"} Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.901486 4780 scope.go:117] "RemoveContainer" containerID="fd366f88b2f1c53058fb51f81ce5e379d833f904e3e651a4b1c101a251bd85ce" Feb 19 10:30:06 crc kubenswrapper[4780]: I0219 10:30:06.902529 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:30:06 crc kubenswrapper[4780]: E0219 10:30:06.902890 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:30:17 crc kubenswrapper[4780]: I0219 10:30:17.354648 4780 scope.go:117] "RemoveContainer" containerID="47ab9826a3da9a7543e2c13731aea3d69002817a516873c0f8058beddbed1979" Feb 19 10:30:21 crc kubenswrapper[4780]: I0219 10:30:21.939706 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:30:21 crc kubenswrapper[4780]: E0219 10:30:21.941056 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:30:33 crc kubenswrapper[4780]: I0219 10:30:33.939230 4780 
scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:30:33 crc kubenswrapper[4780]: E0219 10:30:33.940504 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:30:47 crc kubenswrapper[4780]: I0219 10:30:47.946796 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:30:47 crc kubenswrapper[4780]: E0219 10:30:47.947950 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:31:00 crc kubenswrapper[4780]: I0219 10:31:00.938615 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:31:00 crc kubenswrapper[4780]: E0219 10:31:00.939624 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:31:13 crc kubenswrapper[4780]: I0219 
10:31:13.939609 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:31:13 crc kubenswrapper[4780]: E0219 10:31:13.941058 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:31:26 crc kubenswrapper[4780]: I0219 10:31:26.938965 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:31:26 crc kubenswrapper[4780]: E0219 10:31:26.940171 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:31:37 crc kubenswrapper[4780]: I0219 10:31:37.949275 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:31:37 crc kubenswrapper[4780]: E0219 10:31:37.950315 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:31:50 crc 
kubenswrapper[4780]: I0219 10:31:50.938489 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:31:50 crc kubenswrapper[4780]: E0219 10:31:50.939611 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:32:02 crc kubenswrapper[4780]: I0219 10:32:02.940033 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:32:02 crc kubenswrapper[4780]: E0219 10:32:02.941399 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:32:15 crc kubenswrapper[4780]: I0219 10:32:15.939703 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:32:15 crc kubenswrapper[4780]: E0219 10:32:15.941341 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 
19 10:32:30 crc kubenswrapper[4780]: I0219 10:32:30.939112 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:32:30 crc kubenswrapper[4780]: E0219 10:32:30.940379 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:32:42 crc kubenswrapper[4780]: I0219 10:32:42.940793 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:32:42 crc kubenswrapper[4780]: E0219 10:32:42.942005 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:32:55 crc kubenswrapper[4780]: I0219 10:32:55.944706 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:32:55 crc kubenswrapper[4780]: E0219 10:32:55.946145 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:33:06 crc kubenswrapper[4780]: I0219 10:33:06.940297 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:33:06 crc kubenswrapper[4780]: E0219 10:33:06.941235 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:33:15 crc kubenswrapper[4780]: I0219 10:33:15.327022 4780 generic.go:334] "Generic (PLEG): container finished" podID="e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" containerID="18eef1c0c0642e1d7d418e09e7f6f5c09f25baa27d4ac989c9a220f525d9ffb9" exitCode=0 Feb 19 10:33:15 crc kubenswrapper[4780]: I0219 10:33:15.327113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" event={"ID":"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de","Type":"ContainerDied","Data":"18eef1c0c0642e1d7d418e09e7f6f5c09f25baa27d4ac989c9a220f525d9ffb9"} Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.031398 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.164404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-inventory\") pod \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.164876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ceph\") pod \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.165020 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lbq6\" (UniqueName: \"kubernetes.io/projected/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-kube-api-access-2lbq6\") pod \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.165235 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-secret-0\") pod \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.165434 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ssh-key-openstack-cell1\") pod \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.165496 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-combined-ca-bundle\") pod \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\" (UID: \"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de\") " Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.173431 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-kube-api-access-2lbq6" (OuterVolumeSpecName: "kube-api-access-2lbq6") pod "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" (UID: "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de"). InnerVolumeSpecName "kube-api-access-2lbq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.173711 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" (UID: "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.174920 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ceph" (OuterVolumeSpecName: "ceph") pod "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" (UID: "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.203572 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-inventory" (OuterVolumeSpecName: "inventory") pod "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" (UID: "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.203856 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" (UID: "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.206170 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" (UID: "e1ea75b8-e3be-4982-ad66-e85b3ae2a8de"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.269399 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lbq6\" (UniqueName: \"kubernetes.io/projected/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-kube-api-access-2lbq6\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.269448 4780 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.269462 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.269477 4780 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.269586 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.269604 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1ea75b8-e3be-4982-ad66-e85b3ae2a8de-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.351361 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" event={"ID":"e1ea75b8-e3be-4982-ad66-e85b3ae2a8de","Type":"ContainerDied","Data":"b7abfebb8501522dc576ca90d6b0baba0b07451843f07539adc32922f22d7c89"} Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.351417 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7abfebb8501522dc576ca90d6b0baba0b07451843f07539adc32922f22d7c89" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.351431 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-crkmk" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.477389 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-96mpg"] Feb 19 10:33:17 crc kubenswrapper[4780]: E0219 10:33:17.478220 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a41d8bd-2e1f-4440-997d-bdad1f27cc52" containerName="collect-profiles" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.478243 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a41d8bd-2e1f-4440-997d-bdad1f27cc52" containerName="collect-profiles" Feb 19 10:33:17 crc kubenswrapper[4780]: E0219 10:33:17.478283 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" containerName="libvirt-openstack-openstack-cell1" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.478292 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" containerName="libvirt-openstack-openstack-cell1" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.478531 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ea75b8-e3be-4982-ad66-e85b3ae2a8de" containerName="libvirt-openstack-openstack-cell1" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.478550 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a41d8bd-2e1f-4440-997d-bdad1f27cc52" containerName="collect-profiles" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.479593 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.484484 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.484873 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.485891 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.486218 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.486350 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.488620 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.488912 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.492341 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-96mpg"] Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.576614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-inventory\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.576672 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.576708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.576743 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86gp\" (UniqueName: \"kubernetes.io/projected/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-kube-api-access-x86gp\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.576926 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.576962 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577078 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577183 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577340 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: 
\"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577424 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ceph\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577482 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.577583 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679239 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ceph\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679275 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679317 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-inventory\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679410 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 
10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679452 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679495 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86gp\" (UniqueName: \"kubernetes.io/projected/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-kube-api-access-x86gp\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679574 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679622 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679649 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.679677 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.681362 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.683033 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 
19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.686099 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.686224 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ceph\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.687025 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.687582 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.687658 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: 
\"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.688737 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-inventory\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.688845 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.689572 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.690945 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.693737 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.696612 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86gp\" (UniqueName: \"kubernetes.io/projected/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-kube-api-access-x86gp\") pod \"nova-cell1-openstack-openstack-cell1-96mpg\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:17 crc kubenswrapper[4780]: I0219 10:33:17.805497 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:33:18 crc kubenswrapper[4780]: I0219 10:33:18.409006 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:33:18 crc kubenswrapper[4780]: I0219 10:33:18.415796 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-96mpg"] Feb 19 10:33:19 crc kubenswrapper[4780]: I0219 10:33:19.378840 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" event={"ID":"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56","Type":"ContainerStarted","Data":"96ef69df2215d7a54a7732a50c56c65db143823b5f8d82f38c917c007c15d58f"} Feb 19 10:33:19 crc kubenswrapper[4780]: I0219 10:33:19.379333 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" event={"ID":"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56","Type":"ContainerStarted","Data":"91fd977b6f900b912556b56e9a5a13a275c1ab6071932192d1fb5dd5274d198f"} Feb 19 10:33:19 crc kubenswrapper[4780]: I0219 10:33:19.404769 4780 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" podStartSLOduration=1.9440939529999999 podStartE2EDuration="2.404737422s" podCreationTimestamp="2026-02-19 10:33:17 +0000 UTC" firstStartedPulling="2026-02-19 10:33:18.408776598 +0000 UTC m=+7941.152434047" lastFinishedPulling="2026-02-19 10:33:18.869420067 +0000 UTC m=+7941.613077516" observedRunningTime="2026-02-19 10:33:19.399953399 +0000 UTC m=+7942.143610858" watchObservedRunningTime="2026-02-19 10:33:19.404737422 +0000 UTC m=+7942.148394861" Feb 19 10:33:20 crc kubenswrapper[4780]: I0219 10:33:20.939261 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:33:20 crc kubenswrapper[4780]: E0219 10:33:20.940200 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:33:35 crc kubenswrapper[4780]: I0219 10:33:35.939081 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:33:35 crc kubenswrapper[4780]: E0219 10:33:35.940257 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.623301 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-j8hpv"] Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.627074 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.669255 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8hpv"] Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.697253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-utilities\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.698665 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhhb\" (UniqueName: \"kubernetes.io/projected/f5c950cd-d808-4280-9b67-8b2fb4af5107-kube-api-access-pwhhb\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.698896 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-catalog-content\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.801508 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-utilities\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " 
pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.801657 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwhhb\" (UniqueName: \"kubernetes.io/projected/f5c950cd-d808-4280-9b67-8b2fb4af5107-kube-api-access-pwhhb\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.801730 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-catalog-content\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.802343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-catalog-content\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.802499 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-utilities\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.825209 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwhhb\" (UniqueName: \"kubernetes.io/projected/f5c950cd-d808-4280-9b67-8b2fb4af5107-kube-api-access-pwhhb\") pod \"redhat-marketplace-j8hpv\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " 
pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:37 crc kubenswrapper[4780]: I0219 10:33:37.958981 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:38 crc kubenswrapper[4780]: W0219 10:33:38.524797 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5c950cd_d808_4280_9b67_8b2fb4af5107.slice/crio-a120c487a0ef653f1c8a3efd5ded1bb22f6c2870a90ca9e5bcdd84793f4d53ff WatchSource:0}: Error finding container a120c487a0ef653f1c8a3efd5ded1bb22f6c2870a90ca9e5bcdd84793f4d53ff: Status 404 returned error can't find the container with id a120c487a0ef653f1c8a3efd5ded1bb22f6c2870a90ca9e5bcdd84793f4d53ff Feb 19 10:33:38 crc kubenswrapper[4780]: I0219 10:33:38.524952 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8hpv"] Feb 19 10:33:38 crc kubenswrapper[4780]: I0219 10:33:38.609435 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerStarted","Data":"a120c487a0ef653f1c8a3efd5ded1bb22f6c2870a90ca9e5bcdd84793f4d53ff"} Feb 19 10:33:39 crc kubenswrapper[4780]: I0219 10:33:39.624819 4780 generic.go:334] "Generic (PLEG): container finished" podID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerID="ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293" exitCode=0 Feb 19 10:33:39 crc kubenswrapper[4780]: I0219 10:33:39.624966 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerDied","Data":"ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293"} Feb 19 10:33:40 crc kubenswrapper[4780]: I0219 10:33:40.642159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerStarted","Data":"9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d"} Feb 19 10:33:41 crc kubenswrapper[4780]: I0219 10:33:41.685018 4780 generic.go:334] "Generic (PLEG): container finished" podID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerID="9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d" exitCode=0 Feb 19 10:33:41 crc kubenswrapper[4780]: I0219 10:33:41.685323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerDied","Data":"9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d"} Feb 19 10:33:42 crc kubenswrapper[4780]: I0219 10:33:42.697012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerStarted","Data":"8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed"} Feb 19 10:33:42 crc kubenswrapper[4780]: I0219 10:33:42.742925 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8hpv" podStartSLOduration=3.248064121 podStartE2EDuration="5.742895024s" podCreationTimestamp="2026-02-19 10:33:37 +0000 UTC" firstStartedPulling="2026-02-19 10:33:39.627920109 +0000 UTC m=+7962.371577558" lastFinishedPulling="2026-02-19 10:33:42.122751012 +0000 UTC m=+7964.866408461" observedRunningTime="2026-02-19 10:33:42.730974953 +0000 UTC m=+7965.474632402" watchObservedRunningTime="2026-02-19 10:33:42.742895024 +0000 UTC m=+7965.486552473" Feb 19 10:33:47 crc kubenswrapper[4780]: I0219 10:33:47.960051 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:47 crc kubenswrapper[4780]: I0219 10:33:47.960699 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:48 crc kubenswrapper[4780]: I0219 10:33:48.012356 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:48 crc kubenswrapper[4780]: I0219 10:33:48.843161 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:48 crc kubenswrapper[4780]: I0219 10:33:48.905039 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8hpv"] Feb 19 10:33:48 crc kubenswrapper[4780]: I0219 10:33:48.938763 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:33:48 crc kubenswrapper[4780]: E0219 10:33:48.939063 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.666692 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mzs8k"] Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.670414 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.684140 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzs8k"] Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.776586 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-catalog-content\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.777199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-utilities\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.777370 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtjl\" (UniqueName: \"kubernetes.io/projected/784cd915-1a24-42fa-9cfa-0edac428b908-kube-api-access-5rtjl\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.804318 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8hpv" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="registry-server" containerID="cri-o://8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed" gracePeriod=2 Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.879774 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-utilities\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.879896 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rtjl\" (UniqueName: \"kubernetes.io/projected/784cd915-1a24-42fa-9cfa-0edac428b908-kube-api-access-5rtjl\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.879948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-catalog-content\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.880495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-utilities\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.880505 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-catalog-content\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:50 crc kubenswrapper[4780]: I0219 10:33:50.901885 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rtjl\" (UniqueName: 
\"kubernetes.io/projected/784cd915-1a24-42fa-9cfa-0edac428b908-kube-api-access-5rtjl\") pod \"redhat-operators-mzs8k\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.008548 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.354088 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.514229 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-utilities\") pod \"f5c950cd-d808-4280-9b67-8b2fb4af5107\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.514389 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwhhb\" (UniqueName: \"kubernetes.io/projected/f5c950cd-d808-4280-9b67-8b2fb4af5107-kube-api-access-pwhhb\") pod \"f5c950cd-d808-4280-9b67-8b2fb4af5107\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.514445 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-catalog-content\") pod \"f5c950cd-d808-4280-9b67-8b2fb4af5107\" (UID: \"f5c950cd-d808-4280-9b67-8b2fb4af5107\") " Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.522829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-utilities" (OuterVolumeSpecName: "utilities") pod "f5c950cd-d808-4280-9b67-8b2fb4af5107" (UID: 
"f5c950cd-d808-4280-9b67-8b2fb4af5107"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.527611 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c950cd-d808-4280-9b67-8b2fb4af5107-kube-api-access-pwhhb" (OuterVolumeSpecName: "kube-api-access-pwhhb") pod "f5c950cd-d808-4280-9b67-8b2fb4af5107" (UID: "f5c950cd-d808-4280-9b67-8b2fb4af5107"). InnerVolumeSpecName "kube-api-access-pwhhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.557146 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5c950cd-d808-4280-9b67-8b2fb4af5107" (UID: "f5c950cd-d808-4280-9b67-8b2fb4af5107"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.619671 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.619714 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwhhb\" (UniqueName: \"kubernetes.io/projected/f5c950cd-d808-4280-9b67-8b2fb4af5107-kube-api-access-pwhhb\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.619729 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c950cd-d808-4280-9b67-8b2fb4af5107-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.677586 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-mzs8k"] Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.826343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzs8k" event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerStarted","Data":"c988f3916487ba1650aad38daecb028680f97f72123dae069220ea6195e2e26c"} Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.840247 4780 generic.go:334] "Generic (PLEG): container finished" podID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerID="8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed" exitCode=0 Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.840306 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerDied","Data":"8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed"} Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.840342 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8hpv" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.840364 4780 scope.go:117] "RemoveContainer" containerID="8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.840350 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8hpv" event={"ID":"f5c950cd-d808-4280-9b67-8b2fb4af5107","Type":"ContainerDied","Data":"a120c487a0ef653f1c8a3efd5ded1bb22f6c2870a90ca9e5bcdd84793f4d53ff"} Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.884094 4780 scope.go:117] "RemoveContainer" containerID="9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.901213 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8hpv"] Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.923121 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8hpv"] Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.961763 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" path="/var/lib/kubelet/pods/f5c950cd-d808-4280-9b67-8b2fb4af5107/volumes" Feb 19 10:33:51 crc kubenswrapper[4780]: I0219 10:33:51.987295 4780 scope.go:117] "RemoveContainer" containerID="ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.108453 4780 scope.go:117] "RemoveContainer" containerID="8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed" Feb 19 10:33:52 crc kubenswrapper[4780]: E0219 10:33:52.109023 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed\": container with ID starting with 
8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed not found: ID does not exist" containerID="8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.109062 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed"} err="failed to get container status \"8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed\": rpc error: code = NotFound desc = could not find container \"8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed\": container with ID starting with 8c17d21fbcdc32baaae50db6f0543b5dc8635dbe533933de8ce392c9be54daed not found: ID does not exist" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.109089 4780 scope.go:117] "RemoveContainer" containerID="9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d" Feb 19 10:33:52 crc kubenswrapper[4780]: E0219 10:33:52.109680 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d\": container with ID starting with 9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d not found: ID does not exist" containerID="9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.109703 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d"} err="failed to get container status \"9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d\": rpc error: code = NotFound desc = could not find container \"9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d\": container with ID starting with 9a7a531221a26251e8393a63024c5aedf190fc496f601df7e15452c1c6a21c6d not found: ID does not 
exist" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.109721 4780 scope.go:117] "RemoveContainer" containerID="ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293" Feb 19 10:33:52 crc kubenswrapper[4780]: E0219 10:33:52.109987 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293\": container with ID starting with ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293 not found: ID does not exist" containerID="ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.110011 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293"} err="failed to get container status \"ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293\": rpc error: code = NotFound desc = could not find container \"ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293\": container with ID starting with ca3598515ab3018600bb70f9e0301b49db568e4a367c09c42dbbb2c962a56293 not found: ID does not exist" Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.854181 4780 generic.go:334] "Generic (PLEG): container finished" podID="784cd915-1a24-42fa-9cfa-0edac428b908" containerID="a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3" exitCode=0 Feb 19 10:33:52 crc kubenswrapper[4780]: I0219 10:33:52.854273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzs8k" event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerDied","Data":"a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3"} Feb 19 10:33:54 crc kubenswrapper[4780]: I0219 10:33:54.879083 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzs8k" 
event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerStarted","Data":"434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb"} Feb 19 10:34:00 crc kubenswrapper[4780]: I0219 10:34:00.969320 4780 generic.go:334] "Generic (PLEG): container finished" podID="784cd915-1a24-42fa-9cfa-0edac428b908" containerID="434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb" exitCode=0 Feb 19 10:34:00 crc kubenswrapper[4780]: I0219 10:34:00.969430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzs8k" event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerDied","Data":"434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb"} Feb 19 10:34:01 crc kubenswrapper[4780]: I0219 10:34:01.994594 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzs8k" event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerStarted","Data":"f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724"} Feb 19 10:34:02 crc kubenswrapper[4780]: I0219 10:34:02.016431 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mzs8k" podStartSLOduration=3.387781262 podStartE2EDuration="12.016396904s" podCreationTimestamp="2026-02-19 10:33:50 +0000 UTC" firstStartedPulling="2026-02-19 10:33:52.856248691 +0000 UTC m=+7975.599906140" lastFinishedPulling="2026-02-19 10:34:01.484864333 +0000 UTC m=+7984.228521782" observedRunningTime="2026-02-19 10:34:02.014410425 +0000 UTC m=+7984.758067874" watchObservedRunningTime="2026-02-19 10:34:02.016396904 +0000 UTC m=+7984.760054353" Feb 19 10:34:03 crc kubenswrapper[4780]: I0219 10:34:03.939088 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:34:03 crc kubenswrapper[4780]: E0219 10:34:03.952810 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:34:11 crc kubenswrapper[4780]: I0219 10:34:11.010566 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:34:11 crc kubenswrapper[4780]: I0219 10:34:11.011113 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:34:11 crc kubenswrapper[4780]: I0219 10:34:11.076281 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:34:11 crc kubenswrapper[4780]: I0219 10:34:11.155952 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:34:11 crc kubenswrapper[4780]: I0219 10:34:11.331257 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzs8k"] Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.125916 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mzs8k" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="registry-server" containerID="cri-o://f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724" gracePeriod=2 Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.712631 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.741323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-catalog-content\") pod \"784cd915-1a24-42fa-9cfa-0edac428b908\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.741578 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rtjl\" (UniqueName: \"kubernetes.io/projected/784cd915-1a24-42fa-9cfa-0edac428b908-kube-api-access-5rtjl\") pod \"784cd915-1a24-42fa-9cfa-0edac428b908\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.741732 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-utilities\") pod \"784cd915-1a24-42fa-9cfa-0edac428b908\" (UID: \"784cd915-1a24-42fa-9cfa-0edac428b908\") " Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.744163 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-utilities" (OuterVolumeSpecName: "utilities") pod "784cd915-1a24-42fa-9cfa-0edac428b908" (UID: "784cd915-1a24-42fa-9cfa-0edac428b908"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.750116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784cd915-1a24-42fa-9cfa-0edac428b908-kube-api-access-5rtjl" (OuterVolumeSpecName: "kube-api-access-5rtjl") pod "784cd915-1a24-42fa-9cfa-0edac428b908" (UID: "784cd915-1a24-42fa-9cfa-0edac428b908"). InnerVolumeSpecName "kube-api-access-5rtjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.846028 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.846085 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rtjl\" (UniqueName: \"kubernetes.io/projected/784cd915-1a24-42fa-9cfa-0edac428b908-kube-api-access-5rtjl\") on node \"crc\" DevicePath \"\"" Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.891731 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "784cd915-1a24-42fa-9cfa-0edac428b908" (UID: "784cd915-1a24-42fa-9cfa-0edac428b908"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:34:13 crc kubenswrapper[4780]: I0219 10:34:13.948709 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/784cd915-1a24-42fa-9cfa-0edac428b908-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.141095 4780 generic.go:334] "Generic (PLEG): container finished" podID="784cd915-1a24-42fa-9cfa-0edac428b908" containerID="f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724" exitCode=0 Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.141155 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzs8k" event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerDied","Data":"f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724"} Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.141206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mzs8k" event={"ID":"784cd915-1a24-42fa-9cfa-0edac428b908","Type":"ContainerDied","Data":"c988f3916487ba1650aad38daecb028680f97f72123dae069220ea6195e2e26c"} Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.141236 4780 scope.go:117] "RemoveContainer" containerID="f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.141240 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzs8k" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.168902 4780 scope.go:117] "RemoveContainer" containerID="434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.181475 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzs8k"] Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.194205 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mzs8k"] Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.204822 4780 scope.go:117] "RemoveContainer" containerID="a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.249757 4780 scope.go:117] "RemoveContainer" containerID="f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724" Feb 19 10:34:14 crc kubenswrapper[4780]: E0219 10:34:14.250344 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724\": container with ID starting with f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724 not found: ID does not exist" containerID="f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.250389 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724"} err="failed to get container status \"f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724\": rpc error: code = NotFound desc = could not find container \"f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724\": container with ID starting with f6592777b703d458a076c44c9e63f46ad5f1fffd5d2dc47221fccfcfafd39724 not found: ID does not exist" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.250417 4780 scope.go:117] "RemoveContainer" containerID="434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb" Feb 19 10:34:14 crc kubenswrapper[4780]: E0219 10:34:14.252331 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb\": container with ID starting with 434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb not found: ID does not exist" containerID="434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.252410 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb"} err="failed to get container status \"434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb\": rpc error: code = NotFound desc = could not find container \"434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb\": container with ID starting with 434b2763d39f8272e1842dfbe7c3e1a3ba15a27ae51139a97167f39e23d056bb not found: ID does not exist" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.252462 4780 scope.go:117] "RemoveContainer" containerID="a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3" Feb 19 10:34:14 crc kubenswrapper[4780]: E0219 
10:34:14.252970 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3\": container with ID starting with a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3 not found: ID does not exist" containerID="a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3" Feb 19 10:34:14 crc kubenswrapper[4780]: I0219 10:34:14.253041 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3"} err="failed to get container status \"a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3\": rpc error: code = NotFound desc = could not find container \"a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3\": container with ID starting with a1f2e1fa32bc24b2575e4334c0b5acdb3ea2850a9058980f70d59b0a564c4eb3 not found: ID does not exist" Feb 19 10:34:15 crc kubenswrapper[4780]: I0219 10:34:15.951081 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" path="/var/lib/kubelet/pods/784cd915-1a24-42fa-9cfa-0edac428b908/volumes" Feb 19 10:34:17 crc kubenswrapper[4780]: I0219 10:34:17.947335 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:34:17 crc kubenswrapper[4780]: E0219 10:34:17.949216 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:34:31 crc kubenswrapper[4780]: I0219 10:34:31.940500 
4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:34:31 crc kubenswrapper[4780]: E0219 10:34:31.941805 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:34:44 crc kubenswrapper[4780]: I0219 10:34:44.939625 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:34:44 crc kubenswrapper[4780]: E0219 10:34:44.940439 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:34:58 crc kubenswrapper[4780]: I0219 10:34:58.939626 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:34:58 crc kubenswrapper[4780]: E0219 10:34:58.942839 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:35:09 crc kubenswrapper[4780]: I0219 
10:35:09.940253 4780 scope.go:117] "RemoveContainer" containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:35:10 crc kubenswrapper[4780]: I0219 10:35:10.855574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"92afe3e18f1dddee562ddb81addd557a9fa73d7f3c2031a78601cad9a571fc45"} Feb 19 10:36:00 crc kubenswrapper[4780]: E0219 10:36:00.317075 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff8dfaa_4716_4e9d_bd6c_d6a5afc2fd56.slice/crio-conmon-96ef69df2215d7a54a7732a50c56c65db143823b5f8d82f38c917c007c15d58f.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:36:00 crc kubenswrapper[4780]: I0219 10:36:00.425111 4780 generic.go:334] "Generic (PLEG): container finished" podID="8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" containerID="96ef69df2215d7a54a7732a50c56c65db143823b5f8d82f38c917c007c15d58f" exitCode=0 Feb 19 10:36:00 crc kubenswrapper[4780]: I0219 10:36:00.425235 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" event={"ID":"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56","Type":"ContainerDied","Data":"96ef69df2215d7a54a7732a50c56c65db143823b5f8d82f38c917c007c15d58f"} Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.453905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" event={"ID":"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56","Type":"ContainerDied","Data":"91fd977b6f900b912556b56e9a5a13a275c1ab6071932192d1fb5dd5274d198f"} Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.454565 4780 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="91fd977b6f900b912556b56e9a5a13a275c1ab6071932192d1fb5dd5274d198f" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.522215 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719036 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ssh-key-openstack-cell1\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719107 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-inventory\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719198 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-0\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719351 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x86gp\" (UniqueName: \"kubernetes.io/projected/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-kube-api-access-x86gp\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719391 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-1\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719468 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-0\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719487 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ceph\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719525 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-combined-ca-bundle\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719551 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-1\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719626 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-2\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: 
\"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719712 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-0\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-3\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.719805 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-1\") pod \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\" (UID: \"8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56\") " Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.726468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-kube-api-access-x86gp" (OuterVolumeSpecName: "kube-api-access-x86gp") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "kube-api-access-x86gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.726987 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ceph" (OuterVolumeSpecName: "ceph") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.728242 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.756958 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.757159 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.757273 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cells-global-config-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.757690 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-inventory" (OuterVolumeSpecName: "inventory") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.758175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.759175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.767391 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.767773 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.775388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.794039 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" (UID: "8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823229 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x86gp\" (UniqueName: \"kubernetes.io/projected/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-kube-api-access-x86gp\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823286 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823297 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823307 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823318 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823329 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823341 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823351 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823360 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823369 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823381 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823392 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:02 crc kubenswrapper[4780]: I0219 10:36:02.823402 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.466607 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-96mpg" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.657642 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-bpgr5"] Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671424 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="extract-utilities" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671486 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="extract-utilities" Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671532 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" containerName="nova-cell1-openstack-openstack-cell1" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671543 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" containerName="nova-cell1-openstack-openstack-cell1" Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671571 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="extract-content" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671581 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="extract-content" Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671606 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="registry-server" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671617 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="registry-server" Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671642 4780 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="extract-utilities" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671651 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="extract-utilities" Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671661 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="registry-server" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671669 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="registry-server" Feb 19 10:36:03 crc kubenswrapper[4780]: E0219 10:36:03.671682 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="extract-content" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.671689 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="extract-content" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.672184 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c950cd-d808-4280-9b67-8b2fb4af5107" containerName="registry-server" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.672211 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="784cd915-1a24-42fa-9cfa-0edac428b908" containerName="registry-server" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.672234 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56" containerName="nova-cell1-openstack-openstack-cell1" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.673252 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-bpgr5"] Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.673368 4780 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.682099 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.682193 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.682417 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.682431 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.682508 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.851743 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.851800 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceph\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.851847 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.851992 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.852213 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.852340 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-inventory\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.852407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" 
(UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.852895 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69vn\" (UniqueName: \"kubernetes.io/projected/48bfa98a-2036-4c70-a61d-11579ff28164-kube-api-access-j69vn\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954515 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceph\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954639 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954673 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954698 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-inventory\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954722 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.954805 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69vn\" (UniqueName: \"kubernetes.io/projected/48bfa98a-2036-4c70-a61d-11579ff28164-kube-api-access-j69vn\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " 
pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.959876 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.959936 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceph\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.960501 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.960869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.961221 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.961356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-inventory\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.966761 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:03 crc kubenswrapper[4780]: I0219 10:36:03.976714 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69vn\" (UniqueName: \"kubernetes.io/projected/48bfa98a-2036-4c70-a61d-11579ff28164-kube-api-access-j69vn\") pod \"telemetry-openstack-openstack-cell1-bpgr5\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:04 crc kubenswrapper[4780]: I0219 10:36:04.000821 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:36:04 crc kubenswrapper[4780]: I0219 10:36:04.607792 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-bpgr5"] Feb 19 10:36:05 crc kubenswrapper[4780]: I0219 10:36:05.499255 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" event={"ID":"48bfa98a-2036-4c70-a61d-11579ff28164","Type":"ContainerStarted","Data":"e2cb08a8fa9d735af80a2e713115e0180fbdbd3a818de440463ec37341366944"} Feb 19 10:36:05 crc kubenswrapper[4780]: I0219 10:36:05.499627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" event={"ID":"48bfa98a-2036-4c70-a61d-11579ff28164","Type":"ContainerStarted","Data":"82c8443ccf0c2dc34c50211470364745fe5c7f93b39fdd954b1244f05d4146dd"} Feb 19 10:36:05 crc kubenswrapper[4780]: I0219 10:36:05.533491 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" podStartSLOduration=2.112424022 podStartE2EDuration="2.533461417s" podCreationTimestamp="2026-02-19 10:36:03 +0000 UTC" firstStartedPulling="2026-02-19 10:36:04.616454027 +0000 UTC m=+8107.360111476" lastFinishedPulling="2026-02-19 10:36:05.037491412 +0000 UTC m=+8107.781148871" observedRunningTime="2026-02-19 10:36:05.528692848 +0000 UTC m=+8108.272350337" watchObservedRunningTime="2026-02-19 10:36:05.533461417 +0000 UTC m=+8108.277118876" Feb 19 10:37:36 crc kubenswrapper[4780]: I0219 10:37:36.336754 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:37:36 crc kubenswrapper[4780]: I0219 10:37:36.337375 4780 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:38:06 crc kubenswrapper[4780]: I0219 10:38:06.336368 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:38:06 crc kubenswrapper[4780]: I0219 10:38:06.337000 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:38:36 crc kubenswrapper[4780]: I0219 10:38:36.336016 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:38:36 crc kubenswrapper[4780]: I0219 10:38:36.336612 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:38:36 crc kubenswrapper[4780]: I0219 10:38:36.336680 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:38:36 crc kubenswrapper[4780]: I0219 10:38:36.337842 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92afe3e18f1dddee562ddb81addd557a9fa73d7f3c2031a78601cad9a571fc45"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:38:36 crc kubenswrapper[4780]: I0219 10:38:36.337921 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://92afe3e18f1dddee562ddb81addd557a9fa73d7f3c2031a78601cad9a571fc45" gracePeriod=600 Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.319279 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="92afe3e18f1dddee562ddb81addd557a9fa73d7f3c2031a78601cad9a571fc45" exitCode=0 Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.319352 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"92afe3e18f1dddee562ddb81addd557a9fa73d7f3c2031a78601cad9a571fc45"} Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.319792 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"} Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.319832 4780 scope.go:117] "RemoveContainer" 
containerID="608a6aca86ec6d714a96831d62d791cd4270512ab912d8de49dade8c7a71cf89" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.504994 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n7hm4"] Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.507673 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.521379 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7hm4"] Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.697003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-utilities\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.697085 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-catalog-content\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.697477 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqvm\" (UniqueName: \"kubernetes.io/projected/24d230ad-1004-48b9-880f-9e8db8bd49c1-kube-api-access-tpqvm\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.800612 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-tpqvm\" (UniqueName: \"kubernetes.io/projected/24d230ad-1004-48b9-880f-9e8db8bd49c1-kube-api-access-tpqvm\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.800835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-utilities\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.800866 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-catalog-content\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.801498 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-utilities\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.801498 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-catalog-content\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.826632 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqvm\" (UniqueName: 
\"kubernetes.io/projected/24d230ad-1004-48b9-880f-9e8db8bd49c1-kube-api-access-tpqvm\") pod \"community-operators-n7hm4\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:37 crc kubenswrapper[4780]: I0219 10:38:37.839870 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:38 crc kubenswrapper[4780]: I0219 10:38:38.521080 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7hm4"] Feb 19 10:38:39 crc kubenswrapper[4780]: I0219 10:38:39.345709 4780 generic.go:334] "Generic (PLEG): container finished" podID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerID="46fec74ff2c618843e01cd047d07d6c9d3bbbd1bb124335990de8a1c4326d74d" exitCode=0 Feb 19 10:38:39 crc kubenswrapper[4780]: I0219 10:38:39.345818 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerDied","Data":"46fec74ff2c618843e01cd047d07d6c9d3bbbd1bb124335990de8a1c4326d74d"} Feb 19 10:38:39 crc kubenswrapper[4780]: I0219 10:38:39.346393 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerStarted","Data":"d8126855870bbfae8804b1f2857f12a508a40f176314712ae6255064067264a7"} Feb 19 10:38:39 crc kubenswrapper[4780]: I0219 10:38:39.349196 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:38:40 crc kubenswrapper[4780]: I0219 10:38:40.359675 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerStarted","Data":"4101b3e7eef21d6a3b6f52804457890f931791c91656f9d99659197402941cb2"} Feb 19 10:38:41 
crc kubenswrapper[4780]: I0219 10:38:41.373252 4780 generic.go:334] "Generic (PLEG): container finished" podID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerID="4101b3e7eef21d6a3b6f52804457890f931791c91656f9d99659197402941cb2" exitCode=0 Feb 19 10:38:41 crc kubenswrapper[4780]: I0219 10:38:41.373423 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerDied","Data":"4101b3e7eef21d6a3b6f52804457890f931791c91656f9d99659197402941cb2"} Feb 19 10:38:42 crc kubenswrapper[4780]: I0219 10:38:42.388845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerStarted","Data":"0463b6997fc28ef4b9c10eeba41607da39619283b98115a286eb90706b9764e4"} Feb 19 10:38:42 crc kubenswrapper[4780]: I0219 10:38:42.427643 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n7hm4" podStartSLOduration=2.99398908 podStartE2EDuration="5.427613972s" podCreationTimestamp="2026-02-19 10:38:37 +0000 UTC" firstStartedPulling="2026-02-19 10:38:39.348937228 +0000 UTC m=+8262.092594677" lastFinishedPulling="2026-02-19 10:38:41.78256212 +0000 UTC m=+8264.526219569" observedRunningTime="2026-02-19 10:38:42.413739352 +0000 UTC m=+8265.157396801" watchObservedRunningTime="2026-02-19 10:38:42.427613972 +0000 UTC m=+8265.171271421" Feb 19 10:38:47 crc kubenswrapper[4780]: I0219 10:38:47.840103 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:47 crc kubenswrapper[4780]: I0219 10:38:47.841296 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:47 crc kubenswrapper[4780]: I0219 10:38:47.897011 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:48 crc kubenswrapper[4780]: I0219 10:38:48.523677 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:48 crc kubenswrapper[4780]: I0219 10:38:48.624982 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7hm4"] Feb 19 10:38:50 crc kubenswrapper[4780]: I0219 10:38:50.476018 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n7hm4" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="registry-server" containerID="cri-o://0463b6997fc28ef4b9c10eeba41607da39619283b98115a286eb90706b9764e4" gracePeriod=2 Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.489285 4780 generic.go:334] "Generic (PLEG): container finished" podID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerID="0463b6997fc28ef4b9c10eeba41607da39619283b98115a286eb90706b9764e4" exitCode=0 Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.489488 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerDied","Data":"0463b6997fc28ef4b9c10eeba41607da39619283b98115a286eb90706b9764e4"} Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.489861 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7hm4" event={"ID":"24d230ad-1004-48b9-880f-9e8db8bd49c1","Type":"ContainerDied","Data":"d8126855870bbfae8804b1f2857f12a508a40f176314712ae6255064067264a7"} Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.489883 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8126855870bbfae8804b1f2857f12a508a40f176314712ae6255064067264a7" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 
10:38:51.535171 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.681523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpqvm\" (UniqueName: \"kubernetes.io/projected/24d230ad-1004-48b9-880f-9e8db8bd49c1-kube-api-access-tpqvm\") pod \"24d230ad-1004-48b9-880f-9e8db8bd49c1\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.681792 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-catalog-content\") pod \"24d230ad-1004-48b9-880f-9e8db8bd49c1\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.681987 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-utilities\") pod \"24d230ad-1004-48b9-880f-9e8db8bd49c1\" (UID: \"24d230ad-1004-48b9-880f-9e8db8bd49c1\") " Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.683218 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-utilities" (OuterVolumeSpecName: "utilities") pod "24d230ad-1004-48b9-880f-9e8db8bd49c1" (UID: "24d230ad-1004-48b9-880f-9e8db8bd49c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.711301 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d230ad-1004-48b9-880f-9e8db8bd49c1-kube-api-access-tpqvm" (OuterVolumeSpecName: "kube-api-access-tpqvm") pod "24d230ad-1004-48b9-880f-9e8db8bd49c1" (UID: "24d230ad-1004-48b9-880f-9e8db8bd49c1"). InnerVolumeSpecName "kube-api-access-tpqvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.754839 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24d230ad-1004-48b9-880f-9e8db8bd49c1" (UID: "24d230ad-1004-48b9-880f-9e8db8bd49c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.785636 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpqvm\" (UniqueName: \"kubernetes.io/projected/24d230ad-1004-48b9-880f-9e8db8bd49c1-kube-api-access-tpqvm\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.785917 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:51 crc kubenswrapper[4780]: I0219 10:38:51.785933 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d230ad-1004-48b9-880f-9e8db8bd49c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:52 crc kubenswrapper[4780]: I0219 10:38:52.504975 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7hm4" Feb 19 10:38:52 crc kubenswrapper[4780]: I0219 10:38:52.556406 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7hm4"] Feb 19 10:38:52 crc kubenswrapper[4780]: I0219 10:38:52.570263 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n7hm4"] Feb 19 10:38:53 crc kubenswrapper[4780]: I0219 10:38:53.957265 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" path="/var/lib/kubelet/pods/24d230ad-1004-48b9-880f-9e8db8bd49c1/volumes" Feb 19 10:39:29 crc kubenswrapper[4780]: I0219 10:39:29.943514 4780 generic.go:334] "Generic (PLEG): container finished" podID="48bfa98a-2036-4c70-a61d-11579ff28164" containerID="e2cb08a8fa9d735af80a2e713115e0180fbdbd3a818de440463ec37341366944" exitCode=0 Feb 19 10:39:29 crc kubenswrapper[4780]: I0219 10:39:29.956996 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" event={"ID":"48bfa98a-2036-4c70-a61d-11579ff28164","Type":"ContainerDied","Data":"e2cb08a8fa9d735af80a2e713115e0180fbdbd3a818de440463ec37341366944"} Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.313194 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lb2rz"] Feb 19 10:39:30 crc kubenswrapper[4780]: E0219 10:39:30.313901 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="registry-server" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.313926 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="registry-server" Feb 19 10:39:30 crc kubenswrapper[4780]: E0219 10:39:30.313954 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" 
containerName="extract-utilities" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.313963 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="extract-utilities" Feb 19 10:39:30 crc kubenswrapper[4780]: E0219 10:39:30.313979 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="extract-content" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.313988 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="extract-content" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.314405 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d230ad-1004-48b9-880f-9e8db8bd49c1" containerName="registry-server" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.316586 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.329698 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lb2rz"] Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.356706 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-catalog-content\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.356880 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-utilities\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " 
pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.357039 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69skj\" (UniqueName: \"kubernetes.io/projected/02af48c5-243c-4cd6-91c9-5422939abad0-kube-api-access-69skj\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.459425 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-utilities\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.459588 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69skj\" (UniqueName: \"kubernetes.io/projected/02af48c5-243c-4cd6-91c9-5422939abad0-kube-api-access-69skj\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.459678 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-catalog-content\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.460336 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-utilities\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " 
pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.460447 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-catalog-content\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.482022 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69skj\" (UniqueName: \"kubernetes.io/projected/02af48c5-243c-4cd6-91c9-5422939abad0-kube-api-access-69skj\") pod \"certified-operators-lb2rz\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:30 crc kubenswrapper[4780]: I0219 10:39:30.642800 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.254521 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lb2rz"] Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.701572 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803157 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-1\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803319 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ssh-key-openstack-cell1\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803352 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-0\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803524 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-2\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803547 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j69vn\" (UniqueName: \"kubernetes.io/projected/48bfa98a-2036-4c70-a61d-11579ff28164-kube-api-access-j69vn\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 
crc kubenswrapper[4780]: I0219 10:39:31.803640 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceph\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803757 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-inventory\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.803865 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-telemetry-combined-ca-bundle\") pod \"48bfa98a-2036-4c70-a61d-11579ff28164\" (UID: \"48bfa98a-2036-4c70-a61d-11579ff28164\") " Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.846399 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bfa98a-2036-4c70-a61d-11579ff28164-kube-api-access-j69vn" (OuterVolumeSpecName: "kube-api-access-j69vn") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "kube-api-access-j69vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.846542 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.847087 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceph" (OuterVolumeSpecName: "ceph") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.919562 4780 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.919602 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j69vn\" (UniqueName: \"kubernetes.io/projected/48bfa98a-2036-4c70-a61d-11579ff28164-kube-api-access-j69vn\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.919614 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.927798 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.929445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.941348 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-inventory" (OuterVolumeSpecName: "inventory") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.948839 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.959671 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "48bfa98a-2036-4c70-a61d-11579ff28164" (UID: "48bfa98a-2036-4c70-a61d-11579ff28164"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.974042 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.976241 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-bpgr5" event={"ID":"48bfa98a-2036-4c70-a61d-11579ff28164","Type":"ContainerDied","Data":"82c8443ccf0c2dc34c50211470364745fe5c7f93b39fdd954b1244f05d4146dd"} Feb 19 10:39:31 crc kubenswrapper[4780]: I0219 10:39:31.976307 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c8443ccf0c2dc34c50211470364745fe5c7f93b39fdd954b1244f05d4146dd" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:31.999996 4780 generic.go:334] "Generic (PLEG): container finished" podID="02af48c5-243c-4cd6-91c9-5422939abad0" containerID="d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450" exitCode=0 Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.000061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerDied","Data":"d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450"} Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.000096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerStarted","Data":"bf1796bc316d895c67bf6c5d703f42fb87def912d7ca250ffccc4e3eca409bb8"} Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.023981 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath 
\"\"" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.024403 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.024503 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.024584 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.024657 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48bfa98a-2036-4c70-a61d-11579ff28164-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.141749 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-8ghz7"] Feb 19 10:39:32 crc kubenswrapper[4780]: E0219 10:39:32.142479 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bfa98a-2036-4c70-a61d-11579ff28164" containerName="telemetry-openstack-openstack-cell1" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.142510 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bfa98a-2036-4c70-a61d-11579ff28164" containerName="telemetry-openstack-openstack-cell1" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.142749 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bfa98a-2036-4c70-a61d-11579ff28164" containerName="telemetry-openstack-openstack-cell1" Feb 19 10:39:32 
crc kubenswrapper[4780]: I0219 10:39:32.143829 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.147602 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.148072 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.148729 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.149260 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.149939 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.152963 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-8ghz7"] Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.229225 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.229293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.229331 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.229367 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.229406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5sk\" (UniqueName: \"kubernetes.io/projected/e64ad28f-dcfa-4fca-b69c-53cec95474d9-kube-api-access-zz5sk\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.229466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.332528 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.332607 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.332671 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.332709 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.332758 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5sk\" (UniqueName: 
\"kubernetes.io/projected/e64ad28f-dcfa-4fca-b69c-53cec95474d9-kube-api-access-zz5sk\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.332845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.338341 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.338388 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.338470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.338658 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.339798 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.357152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5sk\" (UniqueName: \"kubernetes.io/projected/e64ad28f-dcfa-4fca-b69c-53cec95474d9-kube-api-access-zz5sk\") pod \"neutron-sriov-openstack-openstack-cell1-8ghz7\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:32 crc kubenswrapper[4780]: I0219 10:39:32.465008 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:39:33 crc kubenswrapper[4780]: I0219 10:39:33.019935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerStarted","Data":"a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d"} Feb 19 10:39:33 crc kubenswrapper[4780]: W0219 10:39:33.214522 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode64ad28f_dcfa_4fca_b69c_53cec95474d9.slice/crio-07b72bf25a7e93b027e7d18403f5f50fb8efe114c2d230d561e56045bc9e68db WatchSource:0}: Error finding container 07b72bf25a7e93b027e7d18403f5f50fb8efe114c2d230d561e56045bc9e68db: Status 404 returned error can't find the container with id 07b72bf25a7e93b027e7d18403f5f50fb8efe114c2d230d561e56045bc9e68db Feb 19 10:39:33 crc kubenswrapper[4780]: I0219 10:39:33.216325 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-8ghz7"] Feb 19 10:39:34 crc kubenswrapper[4780]: I0219 10:39:34.036468 4780 generic.go:334] "Generic (PLEG): container finished" podID="02af48c5-243c-4cd6-91c9-5422939abad0" containerID="a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d" exitCode=0 Feb 19 10:39:34 crc kubenswrapper[4780]: I0219 10:39:34.036536 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerDied","Data":"a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d"} Feb 19 10:39:34 crc kubenswrapper[4780]: I0219 10:39:34.041771 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" 
event={"ID":"e64ad28f-dcfa-4fca-b69c-53cec95474d9","Type":"ContainerStarted","Data":"e3a2d34aea700dbbc7d5b2d0a3934f15e45e11aabfb34100e199d9b14c9aa25a"} Feb 19 10:39:34 crc kubenswrapper[4780]: I0219 10:39:34.041817 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" event={"ID":"e64ad28f-dcfa-4fca-b69c-53cec95474d9","Type":"ContainerStarted","Data":"07b72bf25a7e93b027e7d18403f5f50fb8efe114c2d230d561e56045bc9e68db"} Feb 19 10:39:34 crc kubenswrapper[4780]: I0219 10:39:34.087506 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" podStartSLOduration=1.633029711 podStartE2EDuration="2.087485965s" podCreationTimestamp="2026-02-19 10:39:32 +0000 UTC" firstStartedPulling="2026-02-19 10:39:33.217880831 +0000 UTC m=+8315.961538280" lastFinishedPulling="2026-02-19 10:39:33.672337085 +0000 UTC m=+8316.415994534" observedRunningTime="2026-02-19 10:39:34.086370899 +0000 UTC m=+8316.830028348" watchObservedRunningTime="2026-02-19 10:39:34.087485965 +0000 UTC m=+8316.831143414" Feb 19 10:39:35 crc kubenswrapper[4780]: I0219 10:39:35.057329 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerStarted","Data":"2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9"} Feb 19 10:39:35 crc kubenswrapper[4780]: I0219 10:39:35.080609 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lb2rz" podStartSLOduration=2.536247351 podStartE2EDuration="5.080587942s" podCreationTimestamp="2026-02-19 10:39:30 +0000 UTC" firstStartedPulling="2026-02-19 10:39:32.013065025 +0000 UTC m=+8314.756722474" lastFinishedPulling="2026-02-19 10:39:34.557405616 +0000 UTC m=+8317.301063065" observedRunningTime="2026-02-19 10:39:35.077764405 +0000 UTC 
m=+8317.821421854" watchObservedRunningTime="2026-02-19 10:39:35.080587942 +0000 UTC m=+8317.824245391" Feb 19 10:39:40 crc kubenswrapper[4780]: I0219 10:39:40.643624 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:40 crc kubenswrapper[4780]: I0219 10:39:40.644146 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:40 crc kubenswrapper[4780]: I0219 10:39:40.708325 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:41 crc kubenswrapper[4780]: I0219 10:39:41.178048 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:41 crc kubenswrapper[4780]: I0219 10:39:41.274036 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lb2rz"] Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.137811 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lb2rz" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="registry-server" containerID="cri-o://2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9" gracePeriod=2 Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.706965 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.836758 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-catalog-content\") pod \"02af48c5-243c-4cd6-91c9-5422939abad0\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.836876 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-utilities\") pod \"02af48c5-243c-4cd6-91c9-5422939abad0\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.837032 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69skj\" (UniqueName: \"kubernetes.io/projected/02af48c5-243c-4cd6-91c9-5422939abad0-kube-api-access-69skj\") pod \"02af48c5-243c-4cd6-91c9-5422939abad0\" (UID: \"02af48c5-243c-4cd6-91c9-5422939abad0\") " Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.838319 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-utilities" (OuterVolumeSpecName: "utilities") pod "02af48c5-243c-4cd6-91c9-5422939abad0" (UID: "02af48c5-243c-4cd6-91c9-5422939abad0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.843366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02af48c5-243c-4cd6-91c9-5422939abad0-kube-api-access-69skj" (OuterVolumeSpecName: "kube-api-access-69skj") pod "02af48c5-243c-4cd6-91c9-5422939abad0" (UID: "02af48c5-243c-4cd6-91c9-5422939abad0"). InnerVolumeSpecName "kube-api-access-69skj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.940679 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:43 crc kubenswrapper[4780]: I0219 10:39:43.940720 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69skj\" (UniqueName: \"kubernetes.io/projected/02af48c5-243c-4cd6-91c9-5422939abad0-kube-api-access-69skj\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.152498 4780 generic.go:334] "Generic (PLEG): container finished" podID="02af48c5-243c-4cd6-91c9-5422939abad0" containerID="2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9" exitCode=0 Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.152564 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerDied","Data":"2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9"} Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.152606 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lb2rz" event={"ID":"02af48c5-243c-4cd6-91c9-5422939abad0","Type":"ContainerDied","Data":"bf1796bc316d895c67bf6c5d703f42fb87def912d7ca250ffccc4e3eca409bb8"} Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.152635 4780 scope.go:117] "RemoveContainer" containerID="2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.152899 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lb2rz" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.199354 4780 scope.go:117] "RemoveContainer" containerID="a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.274497 4780 scope.go:117] "RemoveContainer" containerID="d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.329146 4780 scope.go:117] "RemoveContainer" containerID="2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9" Feb 19 10:39:44 crc kubenswrapper[4780]: E0219 10:39:44.329974 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9\": container with ID starting with 2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9 not found: ID does not exist" containerID="2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.330060 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9"} err="failed to get container status \"2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9\": rpc error: code = NotFound desc = could not find container \"2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9\": container with ID starting with 2ca71bbc48ed70e65d1c42a0f8183bf9a9342f58787e5c02be194f5b931be1f9 not found: ID does not exist" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.330121 4780 scope.go:117] "RemoveContainer" containerID="a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d" Feb 19 10:39:44 crc kubenswrapper[4780]: E0219 10:39:44.333620 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d\": container with ID starting with a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d not found: ID does not exist" containerID="a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.333672 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d"} err="failed to get container status \"a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d\": rpc error: code = NotFound desc = could not find container \"a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d\": container with ID starting with a0300c901ae97ce8701959eb4f42530916974862fb49d26cae641f96f2b2ad3d not found: ID does not exist" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.333705 4780 scope.go:117] "RemoveContainer" containerID="d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450" Feb 19 10:39:44 crc kubenswrapper[4780]: E0219 10:39:44.333998 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450\": container with ID starting with d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450 not found: ID does not exist" containerID="d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.334039 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450"} err="failed to get container status \"d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450\": rpc error: code = NotFound desc = could not find container 
\"d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450\": container with ID starting with d60010fe28bb70198e0cab6c64460f7a00b13c469160eee71c13b199daf47450 not found: ID does not exist" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.717097 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02af48c5-243c-4cd6-91c9-5422939abad0" (UID: "02af48c5-243c-4cd6-91c9-5422939abad0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.771048 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02af48c5-243c-4cd6-91c9-5422939abad0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.803252 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lb2rz"] Feb 19 10:39:44 crc kubenswrapper[4780]: I0219 10:39:44.814569 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lb2rz"] Feb 19 10:39:45 crc kubenswrapper[4780]: I0219 10:39:45.967701 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" path="/var/lib/kubelet/pods/02af48c5-243c-4cd6-91c9-5422939abad0/volumes" Feb 19 10:40:36 crc kubenswrapper[4780]: I0219 10:40:36.336865 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:40:36 crc kubenswrapper[4780]: I0219 10:40:36.337575 4780 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:40:42 crc kubenswrapper[4780]: I0219 10:40:42.864373 4780 generic.go:334] "Generic (PLEG): container finished" podID="e64ad28f-dcfa-4fca-b69c-53cec95474d9" containerID="e3a2d34aea700dbbc7d5b2d0a3934f15e45e11aabfb34100e199d9b14c9aa25a" exitCode=0 Feb 19 10:40:42 crc kubenswrapper[4780]: I0219 10:40:42.864468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" event={"ID":"e64ad28f-dcfa-4fca-b69c-53cec95474d9","Type":"ContainerDied","Data":"e3a2d34aea700dbbc7d5b2d0a3934f15e45e11aabfb34100e199d9b14c9aa25a"} Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.719326 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.887334 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" event={"ID":"e64ad28f-dcfa-4fca-b69c-53cec95474d9","Type":"ContainerDied","Data":"07b72bf25a7e93b027e7d18403f5f50fb8efe114c2d230d561e56045bc9e68db"} Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.887393 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07b72bf25a7e93b027e7d18403f5f50fb8efe114c2d230d561e56045bc9e68db" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.887396 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-8ghz7" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.890983 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-agent-neutron-config-0\") pod \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.891913 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ceph\") pod \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.892552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-inventory\") pod \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.892687 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ssh-key-openstack-cell1\") pod \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.892826 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz5sk\" (UniqueName: \"kubernetes.io/projected/e64ad28f-dcfa-4fca-b69c-53cec95474d9-kube-api-access-zz5sk\") pod \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.892929 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-combined-ca-bundle\") pod \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\" (UID: \"e64ad28f-dcfa-4fca-b69c-53cec95474d9\") " Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.897701 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ceph" (OuterVolumeSpecName: "ceph") pod "e64ad28f-dcfa-4fca-b69c-53cec95474d9" (UID: "e64ad28f-dcfa-4fca-b69c-53cec95474d9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.898647 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64ad28f-dcfa-4fca-b69c-53cec95474d9-kube-api-access-zz5sk" (OuterVolumeSpecName: "kube-api-access-zz5sk") pod "e64ad28f-dcfa-4fca-b69c-53cec95474d9" (UID: "e64ad28f-dcfa-4fca-b69c-53cec95474d9"). InnerVolumeSpecName "kube-api-access-zz5sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.900273 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "e64ad28f-dcfa-4fca-b69c-53cec95474d9" (UID: "e64ad28f-dcfa-4fca-b69c-53cec95474d9"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.928716 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-inventory" (OuterVolumeSpecName: "inventory") pod "e64ad28f-dcfa-4fca-b69c-53cec95474d9" (UID: "e64ad28f-dcfa-4fca-b69c-53cec95474d9"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.937425 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "e64ad28f-dcfa-4fca-b69c-53cec95474d9" (UID: "e64ad28f-dcfa-4fca-b69c-53cec95474d9"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:44 crc kubenswrapper[4780]: I0219 10:40:44.951815 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e64ad28f-dcfa-4fca-b69c-53cec95474d9" (UID: "e64ad28f-dcfa-4fca-b69c-53cec95474d9"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.005571 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.005646 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.005796 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.005855 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz5sk\" (UniqueName: 
\"kubernetes.io/projected/e64ad28f-dcfa-4fca-b69c-53cec95474d9-kube-api-access-zz5sk\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.005889 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.005923 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e64ad28f-dcfa-4fca-b69c-53cec95474d9-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.031818 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-jdh54"] Feb 19 10:40:45 crc kubenswrapper[4780]: E0219 10:40:45.032487 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="extract-content" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.032509 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="extract-content" Feb 19 10:40:45 crc kubenswrapper[4780]: E0219 10:40:45.032535 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64ad28f-dcfa-4fca-b69c-53cec95474d9" containerName="neutron-sriov-openstack-openstack-cell1" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.032544 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64ad28f-dcfa-4fca-b69c-53cec95474d9" containerName="neutron-sriov-openstack-openstack-cell1" Feb 19 10:40:45 crc kubenswrapper[4780]: E0219 10:40:45.032573 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="extract-utilities" Feb 19 10:40:45 crc kubenswrapper[4780]: 
I0219 10:40:45.032580 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="extract-utilities" Feb 19 10:40:45 crc kubenswrapper[4780]: E0219 10:40:45.032612 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="registry-server" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.032620 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="registry-server" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.032856 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="02af48c5-243c-4cd6-91c9-5422939abad0" containerName="registry-server" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.032932 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64ad28f-dcfa-4fca-b69c-53cec95474d9" containerName="neutron-sriov-openstack-openstack-cell1" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.035728 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.038481 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.047411 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-jdh54"] Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.211329 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.211737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.211875 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.212177 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7twt\" (UniqueName: 
\"kubernetes.io/projected/96533bdb-10d6-4b37-bbbb-45f209e746d8-kube-api-access-b7twt\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.212356 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.212422 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.315727 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.315888 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.315923 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.316002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7twt\" (UniqueName: \"kubernetes.io/projected/96533bdb-10d6-4b37-bbbb-45f209e746d8-kube-api-access-b7twt\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.316053 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.316096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.320688 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.321181 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.321286 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.323556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.327293 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.336580 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7twt\" (UniqueName: \"kubernetes.io/projected/96533bdb-10d6-4b37-bbbb-45f209e746d8-kube-api-access-b7twt\") pod \"neutron-dhcp-openstack-openstack-cell1-jdh54\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.367324 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:40:45 crc kubenswrapper[4780]: W0219 10:40:45.946280 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96533bdb_10d6_4b37_bbbb_45f209e746d8.slice/crio-9b060ca81f22c68dee1ef201a45b9e0cc577bbec9bfb1e28f26009ff4c84144a WatchSource:0}: Error finding container 9b060ca81f22c68dee1ef201a45b9e0cc577bbec9bfb1e28f26009ff4c84144a: Status 404 returned error can't find the container with id 9b060ca81f22c68dee1ef201a45b9e0cc577bbec9bfb1e28f26009ff4c84144a Feb 19 10:40:45 crc kubenswrapper[4780]: I0219 10:40:45.953632 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-jdh54"] Feb 19 10:40:46 crc kubenswrapper[4780]: I0219 10:40:46.911223 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" event={"ID":"96533bdb-10d6-4b37-bbbb-45f209e746d8","Type":"ContainerStarted","Data":"8c03bd755ee631b933bf56752ae0bd7d0659296845c9468a528e1655ed24ae60"} Feb 19 10:40:46 crc kubenswrapper[4780]: I0219 10:40:46.911753 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" event={"ID":"96533bdb-10d6-4b37-bbbb-45f209e746d8","Type":"ContainerStarted","Data":"9b060ca81f22c68dee1ef201a45b9e0cc577bbec9bfb1e28f26009ff4c84144a"} Feb 19 10:40:46 crc kubenswrapper[4780]: I0219 
10:40:46.947723 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" podStartSLOduration=2.544897133 podStartE2EDuration="2.947687369s" podCreationTimestamp="2026-02-19 10:40:44 +0000 UTC" firstStartedPulling="2026-02-19 10:40:45.950502006 +0000 UTC m=+8388.694159455" lastFinishedPulling="2026-02-19 10:40:46.353292242 +0000 UTC m=+8389.096949691" observedRunningTime="2026-02-19 10:40:46.934375193 +0000 UTC m=+8389.678032642" watchObservedRunningTime="2026-02-19 10:40:46.947687369 +0000 UTC m=+8389.691344818" Feb 19 10:41:06 crc kubenswrapper[4780]: I0219 10:41:06.336065 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:41:06 crc kubenswrapper[4780]: I0219 10:41:06.336567 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:41:36 crc kubenswrapper[4780]: I0219 10:41:36.336403 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:41:36 crc kubenswrapper[4780]: I0219 10:41:36.336930 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:41:36 crc kubenswrapper[4780]: I0219 10:41:36.336983 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:41:36 crc kubenswrapper[4780]: I0219 10:41:36.338039 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:41:36 crc kubenswrapper[4780]: I0219 10:41:36.338097 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" gracePeriod=600 Feb 19 10:41:36 crc kubenswrapper[4780]: E0219 10:41:36.457969 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:41:37 crc kubenswrapper[4780]: I0219 10:41:37.385072 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" exitCode=0 Feb 19 10:41:37 crc kubenswrapper[4780]: I0219 10:41:37.385170 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"} Feb 19 10:41:37 crc kubenswrapper[4780]: I0219 10:41:37.385238 4780 scope.go:117] "RemoveContainer" containerID="92afe3e18f1dddee562ddb81addd557a9fa73d7f3c2031a78601cad9a571fc45" Feb 19 10:41:37 crc kubenswrapper[4780]: I0219 10:41:37.388057 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:41:37 crc kubenswrapper[4780]: E0219 10:41:37.388623 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:41:49 crc kubenswrapper[4780]: I0219 10:41:49.939366 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:41:49 crc kubenswrapper[4780]: E0219 10:41:49.940245 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:42:04 crc kubenswrapper[4780]: I0219 10:42:04.939208 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:42:04 crc kubenswrapper[4780]: E0219 10:42:04.940423 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:42:09 crc kubenswrapper[4780]: I0219 10:42:09.753298 4780 generic.go:334] "Generic (PLEG): container finished" podID="96533bdb-10d6-4b37-bbbb-45f209e746d8" containerID="8c03bd755ee631b933bf56752ae0bd7d0659296845c9468a528e1655ed24ae60" exitCode=0 Feb 19 10:42:09 crc kubenswrapper[4780]: I0219 10:42:09.753438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" event={"ID":"96533bdb-10d6-4b37-bbbb-45f209e746d8","Type":"ContainerDied","Data":"8c03bd755ee631b933bf56752ae0bd7d0659296845c9468a528e1655ed24ae60"} Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.288953 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.397466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7twt\" (UniqueName: \"kubernetes.io/projected/96533bdb-10d6-4b37-bbbb-45f209e746d8-kube-api-access-b7twt\") pod \"96533bdb-10d6-4b37-bbbb-45f209e746d8\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.397562 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ceph\") pod \"96533bdb-10d6-4b37-bbbb-45f209e746d8\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.397601 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ssh-key-openstack-cell1\") pod \"96533bdb-10d6-4b37-bbbb-45f209e746d8\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.397621 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-agent-neutron-config-0\") pod \"96533bdb-10d6-4b37-bbbb-45f209e746d8\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.397784 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-inventory\") pod \"96533bdb-10d6-4b37-bbbb-45f209e746d8\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.397896 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-combined-ca-bundle\") pod \"96533bdb-10d6-4b37-bbbb-45f209e746d8\" (UID: \"96533bdb-10d6-4b37-bbbb-45f209e746d8\") " Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.404619 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96533bdb-10d6-4b37-bbbb-45f209e746d8-kube-api-access-b7twt" (OuterVolumeSpecName: "kube-api-access-b7twt") pod "96533bdb-10d6-4b37-bbbb-45f209e746d8" (UID: "96533bdb-10d6-4b37-bbbb-45f209e746d8"). InnerVolumeSpecName "kube-api-access-b7twt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.404779 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "96533bdb-10d6-4b37-bbbb-45f209e746d8" (UID: "96533bdb-10d6-4b37-bbbb-45f209e746d8"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.404814 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ceph" (OuterVolumeSpecName: "ceph") pod "96533bdb-10d6-4b37-bbbb-45f209e746d8" (UID: "96533bdb-10d6-4b37-bbbb-45f209e746d8"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.429811 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "96533bdb-10d6-4b37-bbbb-45f209e746d8" (UID: "96533bdb-10d6-4b37-bbbb-45f209e746d8"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.430250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "96533bdb-10d6-4b37-bbbb-45f209e746d8" (UID: "96533bdb-10d6-4b37-bbbb-45f209e746d8"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.431887 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-inventory" (OuterVolumeSpecName: "inventory") pod "96533bdb-10d6-4b37-bbbb-45f209e746d8" (UID: "96533bdb-10d6-4b37-bbbb-45f209e746d8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.501644 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.501693 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.501707 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.501721 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.501736 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7twt\" (UniqueName: \"kubernetes.io/projected/96533bdb-10d6-4b37-bbbb-45f209e746d8-kube-api-access-b7twt\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.501748 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96533bdb-10d6-4b37-bbbb-45f209e746d8-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.776702 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" 
event={"ID":"96533bdb-10d6-4b37-bbbb-45f209e746d8","Type":"ContainerDied","Data":"9b060ca81f22c68dee1ef201a45b9e0cc577bbec9bfb1e28f26009ff4c84144a"} Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.776920 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b060ca81f22c68dee1ef201a45b9e0cc577bbec9bfb1e28f26009ff4c84144a" Feb 19 10:42:11 crc kubenswrapper[4780]: I0219 10:42:11.776868 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jdh54" Feb 19 10:42:15 crc kubenswrapper[4780]: I0219 10:42:15.938661 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:42:15 crc kubenswrapper[4780]: E0219 10:42:15.939696 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:42:23 crc kubenswrapper[4780]: I0219 10:42:23.779275 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:42:23 crc kubenswrapper[4780]: I0219 10:42:23.780196 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" containerName="nova-cell0-conductor-conductor" containerID="cri-o://026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" gracePeriod=30 Feb 19 10:42:23 crc kubenswrapper[4780]: I0219 10:42:23.888859 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:42:23 crc kubenswrapper[4780]: I0219 10:42:23.889237 
4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="ac835368-9f36-491e-8684-ad0fbd976e4a" containerName="nova-cell1-conductor-conductor" containerID="cri-o://a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41" gracePeriod=30 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.347002 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.347864 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerName="nova-scheduler-scheduler" containerID="cri-o://1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" gracePeriod=30 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.360070 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.360446 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-log" containerID="cri-o://eef77213377a0841e0cdfbadd5c5c2cad72d804251ad843f81a3ac8c1eeb9ef5" gracePeriod=30 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.360483 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-api" containerID="cri-o://77e0ab4bc2048225d987e1e6766923342e5d61ad2f0992703e85fee058c481a3" gracePeriod=30 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.425734 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.426048 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-log" containerID="cri-o://ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144" gracePeriod=30 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.426110 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-metadata" containerID="cri-o://aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43" gracePeriod=30 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.473746 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.598849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-combined-ca-bundle\") pod \"ac835368-9f36-491e-8684-ad0fbd976e4a\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.599375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ss8s\" (UniqueName: \"kubernetes.io/projected/ac835368-9f36-491e-8684-ad0fbd976e4a-kube-api-access-5ss8s\") pod \"ac835368-9f36-491e-8684-ad0fbd976e4a\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.599528 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-config-data\") pod \"ac835368-9f36-491e-8684-ad0fbd976e4a\" (UID: \"ac835368-9f36-491e-8684-ad0fbd976e4a\") " Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.609306 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ac835368-9f36-491e-8684-ad0fbd976e4a-kube-api-access-5ss8s" (OuterVolumeSpecName: "kube-api-access-5ss8s") pod "ac835368-9f36-491e-8684-ad0fbd976e4a" (UID: "ac835368-9f36-491e-8684-ad0fbd976e4a"). InnerVolumeSpecName "kube-api-access-5ss8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.641014 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac835368-9f36-491e-8684-ad0fbd976e4a" (UID: "ac835368-9f36-491e-8684-ad0fbd976e4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.654192 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-config-data" (OuterVolumeSpecName: "config-data") pod "ac835368-9f36-491e-8684-ad0fbd976e4a" (UID: "ac835368-9f36-491e-8684-ad0fbd976e4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.703222 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.703267 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ss8s\" (UniqueName: \"kubernetes.io/projected/ac835368-9f36-491e-8684-ad0fbd976e4a-kube-api-access-5ss8s\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.703280 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac835368-9f36-491e-8684-ad0fbd976e4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.983338 4780 generic.go:334] "Generic (PLEG): container finished" podID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerID="ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144" exitCode=143 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.983466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"946ccfed-bba6-41c7-bc5a-720b819d37c2","Type":"ContainerDied","Data":"ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144"} Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.986043 4780 generic.go:334] "Generic (PLEG): container finished" podID="ac835368-9f36-491e-8684-ad0fbd976e4a" containerID="a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41" exitCode=0 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.986161 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ac835368-9f36-491e-8684-ad0fbd976e4a","Type":"ContainerDied","Data":"a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41"} Feb 19 
10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.986186 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.986217 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ac835368-9f36-491e-8684-ad0fbd976e4a","Type":"ContainerDied","Data":"e25f04b6be1524eaab70545f27becc8d535ef022caa744c64684c1a64183a3b0"} Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.986253 4780 scope.go:117] "RemoveContainer" containerID="a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41" Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.991080 4780 generic.go:334] "Generic (PLEG): container finished" podID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerID="eef77213377a0841e0cdfbadd5c5c2cad72d804251ad843f81a3ac8c1eeb9ef5" exitCode=143 Feb 19 10:42:25 crc kubenswrapper[4780]: I0219 10:42:25.991110 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c241731f-b635-4434-97f3-5dd498ef0a3c","Type":"ContainerDied","Data":"eef77213377a0841e0cdfbadd5c5c2cad72d804251ad843f81a3ac8c1eeb9ef5"} Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.015596 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.050194 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.058271 4780 scope.go:117] "RemoveContainer" containerID="a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41" Feb 19 10:42:26 crc kubenswrapper[4780]: E0219 10:42:26.060095 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41\": container with ID starting with 
a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41 not found: ID does not exist" containerID="a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.060178 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41"} err="failed to get container status \"a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41\": rpc error: code = NotFound desc = could not find container \"a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41\": container with ID starting with a1d1efa66c726f1fc52dd1f86841dca6ff835cc691673e5c7bf3eb33f53f6c41 not found: ID does not exist" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.069972 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:42:26 crc kubenswrapper[4780]: E0219 10:42:26.070671 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac835368-9f36-491e-8684-ad0fbd976e4a" containerName="nova-cell1-conductor-conductor" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.070696 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac835368-9f36-491e-8684-ad0fbd976e4a" containerName="nova-cell1-conductor-conductor" Feb 19 10:42:26 crc kubenswrapper[4780]: E0219 10:42:26.070739 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96533bdb-10d6-4b37-bbbb-45f209e746d8" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.070751 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="96533bdb-10d6-4b37-bbbb-45f209e746d8" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.071016 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="96533bdb-10d6-4b37-bbbb-45f209e746d8" 
containerName="neutron-dhcp-openstack-openstack-cell1" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.071053 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac835368-9f36-491e-8684-ad0fbd976e4a" containerName="nova-cell1-conductor-conductor" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.072219 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.075782 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.085983 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.220074 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc78037-0a58-45a0-9beb-445eb1327707-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.220628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc78037-0a58-45a0-9beb-445eb1327707-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.220694 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj4zz\" (UniqueName: \"kubernetes.io/projected/1fc78037-0a58-45a0-9beb-445eb1327707-kube-api-access-nj4zz\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 
crc kubenswrapper[4780]: E0219 10:42:26.273017 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:42:26 crc kubenswrapper[4780]: E0219 10:42:26.278510 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:42:26 crc kubenswrapper[4780]: E0219 10:42:26.280139 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:42:26 crc kubenswrapper[4780]: E0219 10:42:26.280259 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerName="nova-scheduler-scheduler" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.323903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc78037-0a58-45a0-9beb-445eb1327707-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.324020 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj4zz\" (UniqueName: \"kubernetes.io/projected/1fc78037-0a58-45a0-9beb-445eb1327707-kube-api-access-nj4zz\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.324247 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc78037-0a58-45a0-9beb-445eb1327707-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.328817 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc78037-0a58-45a0-9beb-445eb1327707-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.328997 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc78037-0a58-45a0-9beb-445eb1327707-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.352538 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj4zz\" (UniqueName: \"kubernetes.io/projected/1fc78037-0a58-45a0-9beb-445eb1327707-kube-api-access-nj4zz\") pod \"nova-cell1-conductor-0\" (UID: \"1fc78037-0a58-45a0-9beb-445eb1327707\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.399600 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:26 crc kubenswrapper[4780]: I0219 10:42:26.948910 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:42:27 crc kubenswrapper[4780]: I0219 10:42:27.005814 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1fc78037-0a58-45a0-9beb-445eb1327707","Type":"ContainerStarted","Data":"d3fbbb2001aee296531772072b58bd6b79170f96b1c115c337055a9a9ac811e3"} Feb 19 10:42:27 crc kubenswrapper[4780]: I0219 10:42:27.952750 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac835368-9f36-491e-8684-ad0fbd976e4a" path="/var/lib/kubelet/pods/ac835368-9f36-491e-8684-ad0fbd976e4a/volumes" Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.018874 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1fc78037-0a58-45a0-9beb-445eb1327707","Type":"ContainerStarted","Data":"7b3a7d08a40337221198f17fa40cb6ce23ae13f36fadc9743f616705b63244b8"} Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.048010 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.047981136 podStartE2EDuration="2.047981136s" podCreationTimestamp="2026-02-19 10:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:42:28.041174824 +0000 UTC m=+8490.784832273" watchObservedRunningTime="2026-02-19 10:42:28.047981136 +0000 UTC m=+8490.791638585" Feb 19 10:42:28 crc kubenswrapper[4780]: E0219 10:42:28.447491 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1 is running failed: container process not found" 
containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 10:42:28 crc kubenswrapper[4780]: E0219 10:42:28.447857 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1 is running failed: container process not found" containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 10:42:28 crc kubenswrapper[4780]: E0219 10:42:28.448309 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1 is running failed: container process not found" containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 10:42:28 crc kubenswrapper[4780]: E0219 10:42:28.448358 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" containerName="nova-cell0-conductor-conductor" Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.775813 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.899272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-config-data\") pod \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.899485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7fgc\" (UniqueName: \"kubernetes.io/projected/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-kube-api-access-r7fgc\") pod \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.899637 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-combined-ca-bundle\") pod \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\" (UID: \"787ed9b0-4ee5-4eae-bc6c-f465c5655d80\") " Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.915865 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-kube-api-access-r7fgc" (OuterVolumeSpecName: "kube-api-access-r7fgc") pod "787ed9b0-4ee5-4eae-bc6c-f465c5655d80" (UID: "787ed9b0-4ee5-4eae-bc6c-f465c5655d80"). InnerVolumeSpecName "kube-api-access-r7fgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.956307 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-config-data" (OuterVolumeSpecName: "config-data") pod "787ed9b0-4ee5-4eae-bc6c-f465c5655d80" (UID: "787ed9b0-4ee5-4eae-bc6c-f465c5655d80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.983659 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "787ed9b0-4ee5-4eae-bc6c-f465c5655d80" (UID: "787ed9b0-4ee5-4eae-bc6c-f465c5655d80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:28 crc kubenswrapper[4780]: I0219 10:42:28.999676 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.002696 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.002720 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7fgc\" (UniqueName: \"kubernetes.io/projected/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-kube-api-access-r7fgc\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.002733 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ed9b0-4ee5-4eae-bc6c-f465c5655d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.052402 4780 generic.go:334] "Generic (PLEG): container finished" podID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerID="aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43" exitCode=0 Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.052501 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"946ccfed-bba6-41c7-bc5a-720b819d37c2","Type":"ContainerDied","Data":"aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43"} Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.052575 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"946ccfed-bba6-41c7-bc5a-720b819d37c2","Type":"ContainerDied","Data":"79bcfcd47175202f2523e98948d8cfc823b274f46bde1b1297d2f98e802d4046"} Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.052618 4780 scope.go:117] "RemoveContainer" containerID="aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.052884 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.059645 4780 generic.go:334] "Generic (PLEG): container finished" podID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" exitCode=0 Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.059748 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"787ed9b0-4ee5-4eae-bc6c-f465c5655d80","Type":"ContainerDied","Data":"026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1"} Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.059787 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"787ed9b0-4ee5-4eae-bc6c-f465c5655d80","Type":"ContainerDied","Data":"12142ab449d677d14069058e082da5437bc27650e0d26c5645bf62f75797e1fd"} Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.059880 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.071503 4780 generic.go:334] "Generic (PLEG): container finished" podID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerID="77e0ab4bc2048225d987e1e6766923342e5d61ad2f0992703e85fee058c481a3" exitCode=0 Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.071586 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c241731f-b635-4434-97f3-5dd498ef0a3c","Type":"ContainerDied","Data":"77e0ab4bc2048225d987e1e6766923342e5d61ad2f0992703e85fee058c481a3"} Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.072047 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.104174 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-config-data\") pod \"946ccfed-bba6-41c7-bc5a-720b819d37c2\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.104326 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-combined-ca-bundle\") pod \"946ccfed-bba6-41c7-bc5a-720b819d37c2\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.104422 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946ccfed-bba6-41c7-bc5a-720b819d37c2-logs\") pod \"946ccfed-bba6-41c7-bc5a-720b819d37c2\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.104504 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stnl8\" 
(UniqueName: \"kubernetes.io/projected/946ccfed-bba6-41c7-bc5a-720b819d37c2-kube-api-access-stnl8\") pod \"946ccfed-bba6-41c7-bc5a-720b819d37c2\" (UID: \"946ccfed-bba6-41c7-bc5a-720b819d37c2\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.108495 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946ccfed-bba6-41c7-bc5a-720b819d37c2-logs" (OuterVolumeSpecName: "logs") pod "946ccfed-bba6-41c7-bc5a-720b819d37c2" (UID: "946ccfed-bba6-41c7-bc5a-720b819d37c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.111015 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.118561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946ccfed-bba6-41c7-bc5a-720b819d37c2-kube-api-access-stnl8" (OuterVolumeSpecName: "kube-api-access-stnl8") pod "946ccfed-bba6-41c7-bc5a-720b819d37c2" (UID: "946ccfed-bba6-41c7-bc5a-720b819d37c2"). InnerVolumeSpecName "kube-api-access-stnl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.129559 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.130883 4780 scope.go:117] "RemoveContainer" containerID="ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.144273 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.144977 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-log" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.145009 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-log" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.145027 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" containerName="nova-cell0-conductor-conductor" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.145037 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" containerName="nova-cell0-conductor-conductor" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.145055 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-metadata" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.145062 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-metadata" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.145409 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-log" Feb 19 
10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.145499 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" containerName="nova-metadata-metadata" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.145518 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" containerName="nova-cell0-conductor-conductor" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.147447 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.152542 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.159567 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.176770 4780 scope.go:117] "RemoveContainer" containerID="aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.177390 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43\": container with ID starting with aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43 not found: ID does not exist" containerID="aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.177419 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43"} err="failed to get container status \"aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43\": rpc error: code = NotFound desc = could not find container 
\"aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43\": container with ID starting with aebe4acc5eb466350bf08c9dc7776a76701b11ca2670433cf146b335c3acba43 not found: ID does not exist" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.177445 4780 scope.go:117] "RemoveContainer" containerID="ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.177682 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144\": container with ID starting with ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144 not found: ID does not exist" containerID="ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.177699 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144"} err="failed to get container status \"ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144\": rpc error: code = NotFound desc = could not find container \"ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144\": container with ID starting with ca28630aeb58f155f39c11b0bef15fddac25c649b276798c00d7132c24f56144 not found: ID does not exist" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.177710 4780 scope.go:117] "RemoveContainer" containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.182139 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "946ccfed-bba6-41c7-bc5a-720b819d37c2" (UID: "946ccfed-bba6-41c7-bc5a-720b819d37c2"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.203149 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-config-data" (OuterVolumeSpecName: "config-data") pod "946ccfed-bba6-41c7-bc5a-720b819d37c2" (UID: "946ccfed-bba6-41c7-bc5a-720b819d37c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.211089 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790dc4cb-be5e-435f-b67b-81b27bbe7048-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.211171 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/790dc4cb-be5e-435f-b67b-81b27bbe7048-kube-api-access-gzg25\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.211280 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790dc4cb-be5e-435f-b67b-81b27bbe7048-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.211832 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc 
kubenswrapper[4780]: I0219 10:42:29.211854 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946ccfed-bba6-41c7-bc5a-720b819d37c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.211872 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946ccfed-bba6-41c7-bc5a-720b819d37c2-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.211886 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stnl8\" (UniqueName: \"kubernetes.io/projected/946ccfed-bba6-41c7-bc5a-720b819d37c2-kube-api-access-stnl8\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.230084 4780 scope.go:117] "RemoveContainer" containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.231518 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1\": container with ID starting with 026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1 not found: ID does not exist" containerID="026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.231569 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1"} err="failed to get container status \"026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1\": rpc error: code = NotFound desc = could not find container \"026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1\": container with ID starting with 026572e12fd61040518d156566c2439be281cc2e5bd16a139066de60b2370cd1 not 
found: ID does not exist" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.315381 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/790dc4cb-be5e-435f-b67b-81b27bbe7048-kube-api-access-gzg25\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.315505 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790dc4cb-be5e-435f-b67b-81b27bbe7048-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.315779 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790dc4cb-be5e-435f-b67b-81b27bbe7048-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.333225 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790dc4cb-be5e-435f-b67b-81b27bbe7048-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.333369 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790dc4cb-be5e-435f-b67b-81b27bbe7048-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.339025 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/790dc4cb-be5e-435f-b67b-81b27bbe7048-kube-api-access-gzg25\") pod \"nova-cell0-conductor-0\" (UID: \"790dc4cb-be5e-435f-b67b-81b27bbe7048\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.465489 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.486602 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.492858 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.501675 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.522186 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-combined-ca-bundle\") pod \"c241731f-b635-4434-97f3-5dd498ef0a3c\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.522277 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-config-data\") pod \"c241731f-b635-4434-97f3-5dd498ef0a3c\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.522416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wtb\" (UniqueName: \"kubernetes.io/projected/c241731f-b635-4434-97f3-5dd498ef0a3c-kube-api-access-p7wtb\") pod \"c241731f-b635-4434-97f3-5dd498ef0a3c\" (UID: 
\"c241731f-b635-4434-97f3-5dd498ef0a3c\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.522449 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c241731f-b635-4434-97f3-5dd498ef0a3c-logs\") pod \"c241731f-b635-4434-97f3-5dd498ef0a3c\" (UID: \"c241731f-b635-4434-97f3-5dd498ef0a3c\") " Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.524475 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c241731f-b635-4434-97f3-5dd498ef0a3c-logs" (OuterVolumeSpecName: "logs") pod "c241731f-b635-4434-97f3-5dd498ef0a3c" (UID: "c241731f-b635-4434-97f3-5dd498ef0a3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.536316 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c241731f-b635-4434-97f3-5dd498ef0a3c-kube-api-access-p7wtb" (OuterVolumeSpecName: "kube-api-access-p7wtb") pod "c241731f-b635-4434-97f3-5dd498ef0a3c" (UID: "c241731f-b635-4434-97f3-5dd498ef0a3c"). InnerVolumeSpecName "kube-api-access-p7wtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.562728 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.563537 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-api" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.563568 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-api" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.563612 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-log" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.563621 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-log" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.563909 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-api" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.563930 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" containerName="nova-api-log" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.568294 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.573203 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.598480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-config-data" (OuterVolumeSpecName: "config-data") pod "c241731f-b635-4434-97f3-5dd498ef0a3c" (UID: "c241731f-b635-4434-97f3-5dd498ef0a3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.621493 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625439 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2047b7bd-ad35-4f35-a73b-1f984bf3891b-config-data\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625608 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2047b7bd-ad35-4f35-a73b-1f984bf3891b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625672 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2047b7bd-ad35-4f35-a73b-1f984bf3891b-logs\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625705 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nps2f\" (UniqueName: \"kubernetes.io/projected/2047b7bd-ad35-4f35-a73b-1f984bf3891b-kube-api-access-nps2f\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625850 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625875 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7wtb\" (UniqueName: \"kubernetes.io/projected/c241731f-b635-4434-97f3-5dd498ef0a3c-kube-api-access-p7wtb\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.625887 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c241731f-b635-4434-97f3-5dd498ef0a3c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.668272 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c241731f-b635-4434-97f3-5dd498ef0a3c" (UID: "c241731f-b635-4434-97f3-5dd498ef0a3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.727792 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2047b7bd-ad35-4f35-a73b-1f984bf3891b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.727880 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2047b7bd-ad35-4f35-a73b-1f984bf3891b-logs\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.727909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nps2f\" (UniqueName: \"kubernetes.io/projected/2047b7bd-ad35-4f35-a73b-1f984bf3891b-kube-api-access-nps2f\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.727994 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2047b7bd-ad35-4f35-a73b-1f984bf3891b-config-data\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.728117 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c241731f-b635-4434-97f3-5dd498ef0a3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.730443 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2047b7bd-ad35-4f35-a73b-1f984bf3891b-logs\") 
pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.733787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2047b7bd-ad35-4f35-a73b-1f984bf3891b-config-data\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.734047 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2047b7bd-ad35-4f35-a73b-1f984bf3891b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.752782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nps2f\" (UniqueName: \"kubernetes.io/projected/2047b7bd-ad35-4f35-a73b-1f984bf3891b-kube-api-access-nps2f\") pod \"nova-metadata-0\" (UID: \"2047b7bd-ad35-4f35-a73b-1f984bf3891b\") " pod="openstack/nova-metadata-0" Feb 19 10:42:29 crc kubenswrapper[4780]: I0219 10:42:29.939055 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:42:29 crc kubenswrapper[4780]: E0219 10:42:29.939565 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:29.996106 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="787ed9b0-4ee5-4eae-bc6c-f465c5655d80" path="/var/lib/kubelet/pods/787ed9b0-4ee5-4eae-bc6c-f465c5655d80/volumes" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:29.997503 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946ccfed-bba6-41c7-bc5a-720b819d37c2" path="/var/lib/kubelet/pods/946ccfed-bba6-41c7-bc5a-720b819d37c2/volumes" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.000421 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.006787 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.022830 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.029258 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mqq2m" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.029474 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.029479 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.029814 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.031775 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.031824 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 10:42:30 
crc kubenswrapper[4780]: I0219 10:42:30.031960 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.038474 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.100963 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.101246 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c241731f-b635-4434-97f3-5dd498ef0a3c","Type":"ContainerDied","Data":"06b42a3ed3c050c7a4248f29ae024d4be2489798558d88a29d0bea6e5a01a8ec"} Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.102384 4780 scope.go:117] "RemoveContainer" containerID="77e0ab4bc2048225d987e1e6766923342e5d61ad2f0992703e85fee058c481a3" Feb 19 10:42:30 crc kubenswrapper[4780]: W0219 10:42:30.107570 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod790dc4cb_be5e_435f_b67b_81b27bbe7048.slice/crio-154607ea23eb77df4dc01a007c6891e6e745fe7fe134b02edfd90e72addf28fc WatchSource:0}: Error finding container 154607ea23eb77df4dc01a007c6891e6e745fe7fe134b02edfd90e72addf28fc: Status 404 returned error can't find the container with id 154607ea23eb77df4dc01a007c6891e6e745fe7fe134b02edfd90e72addf28fc Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.108735 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.137896 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-inventory\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138112 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-0\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138448 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138562 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138661 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ceph\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138704 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.138823 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrnh\" (UniqueName: \"kubernetes.io/projected/4ec28c41-efa8-4b38-8c39-784760e93e05-kube-api-access-nbrnh\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.139806 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-0\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.158223 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.194977 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.206654 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.209692 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.212249 4780 scope.go:117] "RemoveContainer" containerID="eef77213377a0841e0cdfbadd5c5c2cad72d804251ad843f81a3ac8c1eeb9ef5" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.214224 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.222981 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242373 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242416 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242460 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242612 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrnh\" (UniqueName: \"kubernetes.io/projected/4ec28c41-efa8-4b38-8c39-784760e93e05-kube-api-access-nbrnh\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242663 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242694 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-inventory\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242739 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b538280-e74c-4c4a-8f3f-9f6d12254a76-logs\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242779 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242826 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242881 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cccd\" (UniqueName: \"kubernetes.io/projected/9b538280-e74c-4c4a-8f3f-9f6d12254a76-kube-api-access-5cccd\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242915 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242955 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b538280-e74c-4c4a-8f3f-9f6d12254a76-config-data\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.242984 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.243010 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b538280-e74c-4c4a-8f3f-9f6d12254a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.243034 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: 
\"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.243058 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.243094 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.244694 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.245015 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.247659 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.248831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.249865 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.251670 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.252173 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.252578 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.253036 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.257997 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.260856 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.265986 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrnh\" (UniqueName: \"kubernetes.io/projected/4ec28c41-efa8-4b38-8c39-784760e93e05-kube-api-access-nbrnh\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.269287 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.345166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b538280-e74c-4c4a-8f3f-9f6d12254a76-config-data\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.345523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b538280-e74c-4c4a-8f3f-9f6d12254a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.345759 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b538280-e74c-4c4a-8f3f-9f6d12254a76-logs\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.345914 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cccd\" (UniqueName: \"kubernetes.io/projected/9b538280-e74c-4c4a-8f3f-9f6d12254a76-kube-api-access-5cccd\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.352831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b538280-e74c-4c4a-8f3f-9f6d12254a76-config-data\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.358634 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b538280-e74c-4c4a-8f3f-9f6d12254a76-logs\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.366387 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b538280-e74c-4c4a-8f3f-9f6d12254a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.368821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cccd\" (UniqueName: \"kubernetes.io/projected/9b538280-e74c-4c4a-8f3f-9f6d12254a76-kube-api-access-5cccd\") pod \"nova-api-0\" (UID: \"9b538280-e74c-4c4a-8f3f-9f6d12254a76\") " 
pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.369103 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.552569 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:42:30 crc kubenswrapper[4780]: I0219 10:42:30.601221 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.130524 4780 generic.go:334] "Generic (PLEG): container finished" podID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" exitCode=0 Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.130985 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96bd89b2-5989-4451-86e2-9a92c57390fa","Type":"ContainerDied","Data":"1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f"} Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.141733 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"790dc4cb-be5e-435f-b67b-81b27bbe7048","Type":"ContainerStarted","Data":"8171dc90d960b22c8d0070fef1329f437c4d8d95745eb693e9e62a2fbadfa9ce"} Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.141788 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"790dc4cb-be5e-435f-b67b-81b27bbe7048","Type":"ContainerStarted","Data":"154607ea23eb77df4dc01a007c6891e6e745fe7fe134b02edfd90e72addf28fc"} Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.142344 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.159197 4780 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd"] Feb 19 10:42:31 crc kubenswrapper[4780]: W0219 10:42:31.166215 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ec28c41_efa8_4b38_8c39_784760e93e05.slice/crio-02306d19e5c6d6953dc02b12f7d32e3c170bf84d466757adf2bfdc7da720932a WatchSource:0}: Error finding container 02306d19e5c6d6953dc02b12f7d32e3c170bf84d466757adf2bfdc7da720932a: Status 404 returned error can't find the container with id 02306d19e5c6d6953dc02b12f7d32e3c170bf84d466757adf2bfdc7da720932a Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.185314 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2047b7bd-ad35-4f35-a73b-1f984bf3891b","Type":"ContainerStarted","Data":"092b37d78560f1ad4252843ff589b0e68d7acd26213d3c628710ba44cd7d204b"} Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.185369 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2047b7bd-ad35-4f35-a73b-1f984bf3891b","Type":"ContainerStarted","Data":"f4536e44bf16da458089db1521d9381655f562f793455922f6dae4daae307dcd"} Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.206582 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.206549846 podStartE2EDuration="2.206549846s" podCreationTimestamp="2026-02-19 10:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:42:31.194351457 +0000 UTC m=+8493.938008936" watchObservedRunningTime="2026-02-19 10:42:31.206549846 +0000 UTC m=+8493.950207295" Feb 19 10:42:31 crc kubenswrapper[4780]: E0219 10:42:31.279084 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f is running failed: container process not found" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:42:31 crc kubenswrapper[4780]: E0219 10:42:31.284003 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f is running failed: container process not found" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:42:31 crc kubenswrapper[4780]: E0219 10:42:31.284489 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f is running failed: container process not found" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:42:31 crc kubenswrapper[4780]: E0219 10:42:31.284530 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerName="nova-scheduler-scheduler" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.320466 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.323396 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.387959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn94v\" (UniqueName: \"kubernetes.io/projected/96bd89b2-5989-4451-86e2-9a92c57390fa-kube-api-access-hn94v\") pod \"96bd89b2-5989-4451-86e2-9a92c57390fa\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.388415 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-combined-ca-bundle\") pod \"96bd89b2-5989-4451-86e2-9a92c57390fa\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.388542 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-config-data\") pod \"96bd89b2-5989-4451-86e2-9a92c57390fa\" (UID: \"96bd89b2-5989-4451-86e2-9a92c57390fa\") " Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.405521 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bd89b2-5989-4451-86e2-9a92c57390fa-kube-api-access-hn94v" (OuterVolumeSpecName: "kube-api-access-hn94v") pod "96bd89b2-5989-4451-86e2-9a92c57390fa" (UID: "96bd89b2-5989-4451-86e2-9a92c57390fa"). InnerVolumeSpecName "kube-api-access-hn94v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.435734 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-config-data" (OuterVolumeSpecName: "config-data") pod "96bd89b2-5989-4451-86e2-9a92c57390fa" (UID: "96bd89b2-5989-4451-86e2-9a92c57390fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.437074 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96bd89b2-5989-4451-86e2-9a92c57390fa" (UID: "96bd89b2-5989-4451-86e2-9a92c57390fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.493057 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.493156 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd89b2-5989-4451-86e2-9a92c57390fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.493171 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn94v\" (UniqueName: \"kubernetes.io/projected/96bd89b2-5989-4451-86e2-9a92c57390fa-kube-api-access-hn94v\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:31 crc kubenswrapper[4780]: I0219 10:42:31.954625 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c241731f-b635-4434-97f3-5dd498ef0a3c" path="/var/lib/kubelet/pods/c241731f-b635-4434-97f3-5dd498ef0a3c/volumes" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.211872 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2047b7bd-ad35-4f35-a73b-1f984bf3891b","Type":"ContainerStarted","Data":"bc313dd1d560c50a2fcfd8dcedd8420acec8c96da48d470139b1cc75e0b72d5f"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.237488 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" event={"ID":"4ec28c41-efa8-4b38-8c39-784760e93e05","Type":"ContainerStarted","Data":"7a56265a18f7ac535506905c7de1514b9fe8a593e28583263326349c5e94ea0e"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.237826 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" event={"ID":"4ec28c41-efa8-4b38-8c39-784760e93e05","Type":"ContainerStarted","Data":"02306d19e5c6d6953dc02b12f7d32e3c170bf84d466757adf2bfdc7da720932a"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.252979 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b538280-e74c-4c4a-8f3f-9f6d12254a76","Type":"ContainerStarted","Data":"135ff5d525db6d9fcb4aea614bbcaae6f3a6f6fbd1a94007881bac5b117cf2b8"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.253045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b538280-e74c-4c4a-8f3f-9f6d12254a76","Type":"ContainerStarted","Data":"d92cd7079af7ab384cb4a5768d94724cf75668ea4c4d14bc7c7b817177b8bcbd"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.253061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b538280-e74c-4c4a-8f3f-9f6d12254a76","Type":"ContainerStarted","Data":"ddf68a5e1d0391e71bb0a788753aeaf6a32fc01f873241d8d05535645cc6150f"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.257964 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.258716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96bd89b2-5989-4451-86e2-9a92c57390fa","Type":"ContainerDied","Data":"d2168c84d72456f785a1fbc66926c20c180011e342f5fe261dc048147d5dc40e"} Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.258775 4780 scope.go:117] "RemoveContainer" containerID="1a8f5bf6a245e38b89cfa0b89d518fb6f0594db46c558ec60f392cd5407e714f" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.259563 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.259544557 podStartE2EDuration="3.259544557s" podCreationTimestamp="2026-02-19 10:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:42:32.238787634 +0000 UTC m=+8494.982445083" watchObservedRunningTime="2026-02-19 10:42:32.259544557 +0000 UTC m=+8495.003202016" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.290549 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" podStartSLOduration=2.763976466 podStartE2EDuration="3.290515112s" podCreationTimestamp="2026-02-19 10:42:29 +0000 UTC" firstStartedPulling="2026-02-19 10:42:31.180313263 +0000 UTC m=+8493.923970712" lastFinishedPulling="2026-02-19 10:42:31.706851909 +0000 UTC m=+8494.450509358" observedRunningTime="2026-02-19 10:42:32.265028377 +0000 UTC m=+8495.008685836" watchObservedRunningTime="2026-02-19 10:42:32.290515112 +0000 UTC m=+8495.034172561" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.378217 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.396264 4780 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.416939 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.416380962 podStartE2EDuration="2.416380962s" podCreationTimestamp="2026-02-19 10:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:42:32.318718672 +0000 UTC m=+8495.062376121" watchObservedRunningTime="2026-02-19 10:42:32.416380962 +0000 UTC m=+8495.160038411" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.467840 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:42:32 crc kubenswrapper[4780]: E0219 10:42:32.468562 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerName="nova-scheduler-scheduler" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.468586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerName="nova-scheduler-scheduler" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.468861 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" containerName="nova-scheduler-scheduler" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.470024 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.478626 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.488891 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.558656 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6r2l\" (UniqueName: \"kubernetes.io/projected/87290173-ccab-49e6-8f60-4bfeabd11a37-kube-api-access-z6r2l\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.558886 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87290173-ccab-49e6-8f60-4bfeabd11a37-config-data\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.558971 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87290173-ccab-49e6-8f60-4bfeabd11a37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.661716 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6r2l\" (UniqueName: \"kubernetes.io/projected/87290173-ccab-49e6-8f60-4bfeabd11a37-kube-api-access-z6r2l\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.661882 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87290173-ccab-49e6-8f60-4bfeabd11a37-config-data\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.661937 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87290173-ccab-49e6-8f60-4bfeabd11a37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.670420 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87290173-ccab-49e6-8f60-4bfeabd11a37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.671241 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87290173-ccab-49e6-8f60-4bfeabd11a37-config-data\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.689549 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6r2l\" (UniqueName: \"kubernetes.io/projected/87290173-ccab-49e6-8f60-4bfeabd11a37-kube-api-access-z6r2l\") pod \"nova-scheduler-0\" (UID: \"87290173-ccab-49e6-8f60-4bfeabd11a37\") " pod="openstack/nova-scheduler-0" Feb 19 10:42:32 crc kubenswrapper[4780]: I0219 10:42:32.812661 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:42:33 crc kubenswrapper[4780]: W0219 10:42:33.374582 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87290173_ccab_49e6_8f60_4bfeabd11a37.slice/crio-2a7c561f7525220ce9554ab01c64e5e04727df1f8515481eefabe970f695defe WatchSource:0}: Error finding container 2a7c561f7525220ce9554ab01c64e5e04727df1f8515481eefabe970f695defe: Status 404 returned error can't find the container with id 2a7c561f7525220ce9554ab01c64e5e04727df1f8515481eefabe970f695defe Feb 19 10:42:33 crc kubenswrapper[4780]: I0219 10:42:33.375870 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:42:33 crc kubenswrapper[4780]: I0219 10:42:33.952856 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bd89b2-5989-4451-86e2-9a92c57390fa" path="/var/lib/kubelet/pods/96bd89b2-5989-4451-86e2-9a92c57390fa/volumes" Feb 19 10:42:34 crc kubenswrapper[4780]: I0219 10:42:34.285045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87290173-ccab-49e6-8f60-4bfeabd11a37","Type":"ContainerStarted","Data":"38d811a24fb6cc52b1dbc8781510e00a76a52590c94d81383c26bd97d96a23bc"} Feb 19 10:42:34 crc kubenswrapper[4780]: I0219 10:42:34.285425 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87290173-ccab-49e6-8f60-4bfeabd11a37","Type":"ContainerStarted","Data":"2a7c561f7525220ce9554ab01c64e5e04727df1f8515481eefabe970f695defe"} Feb 19 10:42:34 crc kubenswrapper[4780]: I0219 10:42:34.309112 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.309085846 podStartE2EDuration="2.309085846s" podCreationTimestamp="2026-02-19 10:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:42:34.30633833 +0000 UTC m=+8497.049995779" watchObservedRunningTime="2026-02-19 10:42:34.309085846 +0000 UTC m=+8497.052743295" Feb 19 10:42:35 crc kubenswrapper[4780]: I0219 10:42:35.010261 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:42:35 crc kubenswrapper[4780]: I0219 10:42:35.010349 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:42:36 crc kubenswrapper[4780]: I0219 10:42:36.437593 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 10:42:37 crc kubenswrapper[4780]: I0219 10:42:37.813914 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:42:39 crc kubenswrapper[4780]: I0219 10:42:39.526672 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 10:42:40 crc kubenswrapper[4780]: I0219 10:42:40.008159 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:42:40 crc kubenswrapper[4780]: I0219 10:42:40.008243 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:42:40 crc kubenswrapper[4780]: I0219 10:42:40.553164 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:42:40 crc kubenswrapper[4780]: I0219 10:42:40.553243 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:42:41 crc kubenswrapper[4780]: I0219 10:42:41.090390 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2047b7bd-ad35-4f35-a73b-1f984bf3891b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.184:8775/\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:42:41 crc kubenswrapper[4780]: I0219 10:42:41.090412 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2047b7bd-ad35-4f35-a73b-1f984bf3891b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.184:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:42:41 crc kubenswrapper[4780]: I0219 10:42:41.635427 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b538280-e74c-4c4a-8f3f-9f6d12254a76" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:42:41 crc kubenswrapper[4780]: I0219 10:42:41.635650 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b538280-e74c-4c4a-8f3f-9f6d12254a76" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:42:42 crc kubenswrapper[4780]: I0219 10:42:42.813633 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:42:42 crc kubenswrapper[4780]: I0219 10:42:42.848197 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:42:43 crc kubenswrapper[4780]: I0219 10:42:43.440506 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:42:44 crc kubenswrapper[4780]: I0219 10:42:44.939223 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:42:44 crc kubenswrapper[4780]: E0219 10:42:44.939801 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.011023 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.011744 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.013457 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.013893 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.564456 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.565091 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.565280 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:42:50 crc kubenswrapper[4780]: I0219 10:42:50.568603 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:42:51 crc kubenswrapper[4780]: I0219 10:42:51.499724 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:42:51 crc kubenswrapper[4780]: I0219 10:42:51.504359 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 
10:42:55 crc kubenswrapper[4780]: I0219 10:42:55.938930 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:42:55 crc kubenswrapper[4780]: E0219 10:42:55.939919 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:43:07 crc kubenswrapper[4780]: I0219 10:43:07.948614 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:43:07 crc kubenswrapper[4780]: E0219 10:43:07.949579 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:43:19 crc kubenswrapper[4780]: I0219 10:43:19.940527 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:43:19 crc kubenswrapper[4780]: E0219 10:43:19.941893 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:43:33 crc kubenswrapper[4780]: I0219 10:43:33.939623 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:43:33 crc kubenswrapper[4780]: E0219 10:43:33.941070 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:43:45 crc kubenswrapper[4780]: I0219 10:43:45.939008 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:43:45 crc kubenswrapper[4780]: E0219 10:43:45.939945 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.755212 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dnw2t"] Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.758245 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.769585 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnw2t"] Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.864043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-utilities\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.864145 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpzx\" (UniqueName: \"kubernetes.io/projected/e07caa72-4db3-4396-98b1-55e4b1d2600a-kube-api-access-6bpzx\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.864526 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-catalog-content\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.969982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bpzx\" (UniqueName: \"kubernetes.io/projected/e07caa72-4db3-4396-98b1-55e4b1d2600a-kube-api-access-6bpzx\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.970207 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-catalog-content\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.970577 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-utilities\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.970724 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-catalog-content\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.971052 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-utilities\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:46 crc kubenswrapper[4780]: I0219 10:43:46.999237 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bpzx\" (UniqueName: \"kubernetes.io/projected/e07caa72-4db3-4396-98b1-55e4b1d2600a-kube-api-access-6bpzx\") pod \"redhat-marketplace-dnw2t\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:47 crc kubenswrapper[4780]: I0219 10:43:47.082787 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:47 crc kubenswrapper[4780]: I0219 10:43:47.799781 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnw2t"] Feb 19 10:43:48 crc kubenswrapper[4780]: I0219 10:43:48.186205 4780 generic.go:334] "Generic (PLEG): container finished" podID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerID="de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2" exitCode=0 Feb 19 10:43:48 crc kubenswrapper[4780]: I0219 10:43:48.186254 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerDied","Data":"de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2"} Feb 19 10:43:48 crc kubenswrapper[4780]: I0219 10:43:48.186523 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerStarted","Data":"ce2eeb5305dc6b45c25c0094c12b02dabc4568eaf6c539bc4f12a74970f38b99"} Feb 19 10:43:49 crc kubenswrapper[4780]: I0219 10:43:49.200827 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:43:51 crc kubenswrapper[4780]: I0219 10:43:51.230529 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerStarted","Data":"e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9"} Feb 19 10:43:52 crc kubenswrapper[4780]: I0219 10:43:52.243531 4780 generic.go:334] "Generic (PLEG): container finished" podID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerID="e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9" exitCode=0 Feb 19 10:43:52 crc kubenswrapper[4780]: I0219 10:43:52.243621 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerDied","Data":"e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9"} Feb 19 10:43:54 crc kubenswrapper[4780]: I0219 10:43:54.272515 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerStarted","Data":"6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2"} Feb 19 10:43:54 crc kubenswrapper[4780]: I0219 10:43:54.336586 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dnw2t" podStartSLOduration=4.710673854 podStartE2EDuration="8.336559364s" podCreationTimestamp="2026-02-19 10:43:46 +0000 UTC" firstStartedPulling="2026-02-19 10:43:49.200463325 +0000 UTC m=+8571.944120774" lastFinishedPulling="2026-02-19 10:43:52.826348835 +0000 UTC m=+8575.570006284" observedRunningTime="2026-02-19 10:43:54.311510919 +0000 UTC m=+8577.055168378" watchObservedRunningTime="2026-02-19 10:43:54.336559364 +0000 UTC m=+8577.080216813" Feb 19 10:43:57 crc kubenswrapper[4780]: I0219 10:43:57.083165 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:57 crc kubenswrapper[4780]: I0219 10:43:57.083445 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:57 crc kubenswrapper[4780]: I0219 10:43:57.134816 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:43:58 crc kubenswrapper[4780]: I0219 10:43:58.938418 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:43:58 crc kubenswrapper[4780]: E0219 10:43:58.939174 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:44:07 crc kubenswrapper[4780]: I0219 10:44:07.142593 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:44:07 crc kubenswrapper[4780]: I0219 10:44:07.212017 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnw2t"] Feb 19 10:44:07 crc kubenswrapper[4780]: I0219 10:44:07.421616 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dnw2t" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="registry-server" containerID="cri-o://6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2" gracePeriod=2 Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.144018 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnw2t" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.321160 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bpzx\" (UniqueName: \"kubernetes.io/projected/e07caa72-4db3-4396-98b1-55e4b1d2600a-kube-api-access-6bpzx\") pod \"e07caa72-4db3-4396-98b1-55e4b1d2600a\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.321455 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-utilities\") pod \"e07caa72-4db3-4396-98b1-55e4b1d2600a\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.321605 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-catalog-content\") pod \"e07caa72-4db3-4396-98b1-55e4b1d2600a\" (UID: \"e07caa72-4db3-4396-98b1-55e4b1d2600a\") " Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.322511 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-utilities" (OuterVolumeSpecName: "utilities") pod "e07caa72-4db3-4396-98b1-55e4b1d2600a" (UID: "e07caa72-4db3-4396-98b1-55e4b1d2600a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.331222 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07caa72-4db3-4396-98b1-55e4b1d2600a-kube-api-access-6bpzx" (OuterVolumeSpecName: "kube-api-access-6bpzx") pod "e07caa72-4db3-4396-98b1-55e4b1d2600a" (UID: "e07caa72-4db3-4396-98b1-55e4b1d2600a"). InnerVolumeSpecName "kube-api-access-6bpzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.347619 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e07caa72-4db3-4396-98b1-55e4b1d2600a" (UID: "e07caa72-4db3-4396-98b1-55e4b1d2600a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.424710 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.424767 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bpzx\" (UniqueName: \"kubernetes.io/projected/e07caa72-4db3-4396-98b1-55e4b1d2600a-kube-api-access-6bpzx\") on node \"crc\" DevicePath \"\"" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.424786 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07caa72-4db3-4396-98b1-55e4b1d2600a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.436383 4780 generic.go:334] "Generic (PLEG): container finished" podID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerID="6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2" exitCode=0 Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.436481 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerDied","Data":"6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2"} Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.436581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-dnw2t" event={"ID":"e07caa72-4db3-4396-98b1-55e4b1d2600a","Type":"ContainerDied","Data":"ce2eeb5305dc6b45c25c0094c12b02dabc4568eaf6c539bc4f12a74970f38b99"}
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.436605 4780 scope.go:117] "RemoveContainer" containerID="6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.436521 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnw2t"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.473965 4780 scope.go:117] "RemoveContainer" containerID="e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.484687 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnw2t"]
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.498929 4780 scope.go:117] "RemoveContainer" containerID="de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.503086 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnw2t"]
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.539308 4780 scope.go:117] "RemoveContainer" containerID="6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2"
Feb 19 10:44:08 crc kubenswrapper[4780]: E0219 10:44:08.540140 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2\": container with ID starting with 6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2 not found: ID does not exist" containerID="6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.540192 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2"} err="failed to get container status \"6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2\": rpc error: code = NotFound desc = could not find container \"6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2\": container with ID starting with 6d52e40916a0e14308dd9232f799f388a29b97b98ec2117cce944c622af2aae2 not found: ID does not exist"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.540224 4780 scope.go:117] "RemoveContainer" containerID="e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9"
Feb 19 10:44:08 crc kubenswrapper[4780]: E0219 10:44:08.540600 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9\": container with ID starting with e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9 not found: ID does not exist" containerID="e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.540626 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9"} err="failed to get container status \"e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9\": rpc error: code = NotFound desc = could not find container \"e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9\": container with ID starting with e69b51509e761ef8d78f8373c67c1b431609d7079b5e88b591cecd9b37bbcda9 not found: ID does not exist"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.540645 4780 scope.go:117] "RemoveContainer" containerID="de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2"
Feb 19 10:44:08 crc kubenswrapper[4780]: E0219 10:44:08.541467 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2\": container with ID starting with de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2 not found: ID does not exist" containerID="de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2"
Feb 19 10:44:08 crc kubenswrapper[4780]: I0219 10:44:08.541505 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2"} err="failed to get container status \"de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2\": rpc error: code = NotFound desc = could not find container \"de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2\": container with ID starting with de8ef24f4518295c5c5e5d3bc1b909cf84be748396e93fdf03755ea6e83fd5f2 not found: ID does not exist"
Feb 19 10:44:09 crc kubenswrapper[4780]: I0219 10:44:09.955280 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" path="/var/lib/kubelet/pods/e07caa72-4db3-4396-98b1-55e4b1d2600a/volumes"
Feb 19 10:44:12 crc kubenswrapper[4780]: I0219 10:44:12.938605 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"
Feb 19 10:44:12 crc kubenswrapper[4780]: E0219 10:44:12.939323 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 10:44:27 crc kubenswrapper[4780]: I0219 10:44:27.954208 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"
Feb 19 10:44:27 crc kubenswrapper[4780]: E0219 10:44:27.955654 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.713674 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qd6rq"]
Feb 19 10:44:31 crc kubenswrapper[4780]: E0219 10:44:31.721814 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="extract-content"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.721855 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="extract-content"
Feb 19 10:44:31 crc kubenswrapper[4780]: E0219 10:44:31.721883 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="extract-utilities"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.721891 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="extract-utilities"
Feb 19 10:44:31 crc kubenswrapper[4780]: E0219 10:44:31.721946 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="registry-server"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.721954 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="registry-server"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.723495 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07caa72-4db3-4396-98b1-55e4b1d2600a" containerName="registry-server"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.732924 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.765786 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qd6rq"]
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.895183 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-utilities\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.895253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-catalog-content\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:31 crc kubenswrapper[4780]: I0219 10:44:31.895414 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pklx\" (UniqueName: \"kubernetes.io/projected/08f32570-8b7e-40ab-85fe-68e945b8130a-kube-api-access-7pklx\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.002019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pklx\" (UniqueName: \"kubernetes.io/projected/08f32570-8b7e-40ab-85fe-68e945b8130a-kube-api-access-7pklx\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.002225 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-utilities\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.002248 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-catalog-content\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.002902 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-catalog-content\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.003477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-utilities\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.061071 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pklx\" (UniqueName: \"kubernetes.io/projected/08f32570-8b7e-40ab-85fe-68e945b8130a-kube-api-access-7pklx\") pod \"redhat-operators-qd6rq\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") " pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.072103 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:32 crc kubenswrapper[4780]: I0219 10:44:32.886682 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qd6rq"]
Feb 19 10:44:33 crc kubenswrapper[4780]: I0219 10:44:33.759554 4780 generic.go:334] "Generic (PLEG): container finished" podID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerID="d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242" exitCode=0
Feb 19 10:44:33 crc kubenswrapper[4780]: I0219 10:44:33.759673 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerDied","Data":"d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242"}
Feb 19 10:44:33 crc kubenswrapper[4780]: I0219 10:44:33.759813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerStarted","Data":"1edc932d4df335964b24bba8e5334e6ebcf91de75c96b0dbfb7989491f29d250"}
Feb 19 10:44:35 crc kubenswrapper[4780]: I0219 10:44:35.803150 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerStarted","Data":"0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c"}
Feb 19 10:44:38 crc kubenswrapper[4780]: I0219 10:44:38.845659 4780 generic.go:334] "Generic (PLEG): container finished" podID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerID="0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c" exitCode=0
Feb 19 10:44:38 crc kubenswrapper[4780]: I0219 10:44:38.845774 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerDied","Data":"0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c"}
Feb 19 10:44:39 crc kubenswrapper[4780]: I0219 10:44:39.863551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerStarted","Data":"9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f"}
Feb 19 10:44:39 crc kubenswrapper[4780]: I0219 10:44:39.905087 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qd6rq" podStartSLOduration=3.448144301 podStartE2EDuration="8.90505838s" podCreationTimestamp="2026-02-19 10:44:31 +0000 UTC" firstStartedPulling="2026-02-19 10:44:33.764890173 +0000 UTC m=+8616.508547622" lastFinishedPulling="2026-02-19 10:44:39.221804252 +0000 UTC m=+8621.965461701" observedRunningTime="2026-02-19 10:44:39.891254053 +0000 UTC m=+8622.634911512" watchObservedRunningTime="2026-02-19 10:44:39.90505838 +0000 UTC m=+8622.648715829"
Feb 19 10:44:39 crc kubenswrapper[4780]: I0219 10:44:39.941303 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"
Feb 19 10:44:39 crc kubenswrapper[4780]: E0219 10:44:39.941608 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 10:44:42 crc kubenswrapper[4780]: I0219 10:44:42.073541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:42 crc kubenswrapper[4780]: I0219 10:44:42.073901 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:43 crc kubenswrapper[4780]: I0219 10:44:43.132474 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qd6rq" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:44:43 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:44:43 crc kubenswrapper[4780]: >
Feb 19 10:44:52 crc kubenswrapper[4780]: I0219 10:44:52.131707 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:52 crc kubenswrapper[4780]: I0219 10:44:52.200261 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:52 crc kubenswrapper[4780]: I0219 10:44:52.381821 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qd6rq"]
Feb 19 10:44:53 crc kubenswrapper[4780]: I0219 10:44:53.939336 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e"
Feb 19 10:44:53 crc kubenswrapper[4780]: E0219 10:44:53.941045 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.054163 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qd6rq" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="registry-server" containerID="cri-o://9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f" gracePeriod=2
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.802625 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.825731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-utilities\") pod \"08f32570-8b7e-40ab-85fe-68e945b8130a\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") "
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.825832 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-catalog-content\") pod \"08f32570-8b7e-40ab-85fe-68e945b8130a\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") "
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.826001 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pklx\" (UniqueName: \"kubernetes.io/projected/08f32570-8b7e-40ab-85fe-68e945b8130a-kube-api-access-7pklx\") pod \"08f32570-8b7e-40ab-85fe-68e945b8130a\" (UID: \"08f32570-8b7e-40ab-85fe-68e945b8130a\") "
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.826981 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-utilities" (OuterVolumeSpecName: "utilities") pod "08f32570-8b7e-40ab-85fe-68e945b8130a" (UID: "08f32570-8b7e-40ab-85fe-68e945b8130a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.835517 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f32570-8b7e-40ab-85fe-68e945b8130a-kube-api-access-7pklx" (OuterVolumeSpecName: "kube-api-access-7pklx") pod "08f32570-8b7e-40ab-85fe-68e945b8130a" (UID: "08f32570-8b7e-40ab-85fe-68e945b8130a"). InnerVolumeSpecName "kube-api-access-7pklx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.930355 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pklx\" (UniqueName: \"kubernetes.io/projected/08f32570-8b7e-40ab-85fe-68e945b8130a-kube-api-access-7pklx\") on node \"crc\" DevicePath \"\""
Feb 19 10:44:54 crc kubenswrapper[4780]: I0219 10:44:54.930415 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.021019 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08f32570-8b7e-40ab-85fe-68e945b8130a" (UID: "08f32570-8b7e-40ab-85fe-68e945b8130a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.033418 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f32570-8b7e-40ab-85fe-68e945b8130a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.115544 4780 generic.go:334] "Generic (PLEG): container finished" podID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerID="9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f" exitCode=0
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.115620 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerDied","Data":"9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f"}
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.115666 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6rq" event={"ID":"08f32570-8b7e-40ab-85fe-68e945b8130a","Type":"ContainerDied","Data":"1edc932d4df335964b24bba8e5334e6ebcf91de75c96b0dbfb7989491f29d250"}
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.115692 4780 scope.go:117] "RemoveContainer" containerID="9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.115977 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6rq"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.195675 4780 scope.go:117] "RemoveContainer" containerID="0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.204186 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qd6rq"]
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.236610 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qd6rq"]
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.253536 4780 scope.go:117] "RemoveContainer" containerID="d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.317556 4780 scope.go:117] "RemoveContainer" containerID="9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f"
Feb 19 10:44:55 crc kubenswrapper[4780]: E0219 10:44:55.318202 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f\": container with ID starting with 9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f not found: ID does not exist" containerID="9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.318266 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f"} err="failed to get container status \"9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f\": rpc error: code = NotFound desc = could not find container \"9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f\": container with ID starting with 9aac76a1447bc93a513f97acf16aa081fda8712d3ad312ef467d08fda238322f not found: ID does not exist"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.318313 4780 scope.go:117] "RemoveContainer" containerID="0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c"
Feb 19 10:44:55 crc kubenswrapper[4780]: E0219 10:44:55.318880 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c\": container with ID starting with 0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c not found: ID does not exist" containerID="0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.318937 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c"} err="failed to get container status \"0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c\": rpc error: code = NotFound desc = could not find container \"0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c\": container with ID starting with 0953cd860992ccffdf70306f4c2eb38adb65b1136aee97d583162bed0a14bb8c not found: ID does not exist"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.318963 4780 scope.go:117] "RemoveContainer" containerID="d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242"
Feb 19 10:44:55 crc kubenswrapper[4780]: E0219 10:44:55.319349 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242\": container with ID starting with d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242 not found: ID does not exist" containerID="d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.319411 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242"} err="failed to get container status \"d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242\": rpc error: code = NotFound desc = could not find container \"d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242\": container with ID starting with d852b54a6b2b4cceb63b1b80ecc1c0da8f68db028b575d4b533ce20d2b41b242 not found: ID does not exist"
Feb 19 10:44:55 crc kubenswrapper[4780]: I0219 10:44:55.961074 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" path="/var/lib/kubelet/pods/08f32570-8b7e-40ab-85fe-68e945b8130a/volumes"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.175566 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"]
Feb 19 10:45:00 crc kubenswrapper[4780]: E0219 10:45:00.176783 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="extract-content"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.176801 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="extract-content"
Feb 19 10:45:00 crc kubenswrapper[4780]: E0219 10:45:00.176819 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="extract-utilities"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.176825 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="extract-utilities"
Feb 19 10:45:00 crc kubenswrapper[4780]: E0219 10:45:00.176840 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="registry-server"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.176847 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="registry-server"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.177098 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f32570-8b7e-40ab-85fe-68e945b8130a" containerName="registry-server"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.178206 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.184561 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.184612 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.189013 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"]
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.267406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbn2q\" (UniqueName: \"kubernetes.io/projected/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-kube-api-access-qbn2q\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.267554 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-config-volume\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.267828 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-secret-volume\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.370235 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbn2q\" (UniqueName: \"kubernetes.io/projected/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-kube-api-access-qbn2q\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.370406 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-config-volume\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.370586 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-secret-volume\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.371375 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-config-volume\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.385872 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-secret-volume\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.389520 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbn2q\" (UniqueName: \"kubernetes.io/projected/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-kube-api-access-qbn2q\") pod \"collect-profiles-29524965-lbp2c\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:00 crc kubenswrapper[4780]: I0219 10:45:00.503835 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:01 crc kubenswrapper[4780]: I0219 10:45:01.048311 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"]
Feb 19 10:45:01 crc kubenswrapper[4780]: I0219 10:45:01.229528 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c" event={"ID":"3572fc29-9861-4ca3-b1a1-7440b1f4b43b","Type":"ContainerStarted","Data":"03d6cde3e017b2ba378ae7efdc48eb102307256c651288ad7176a01bd6c57341"}
Feb 19 10:45:02 crc kubenswrapper[4780]: I0219 10:45:02.243395 4780 generic.go:334] "Generic (PLEG): container finished" podID="3572fc29-9861-4ca3-b1a1-7440b1f4b43b" containerID="3f2dabc03f4b7baef1b7212755b37dbf16f14dfea4ade6712b1e65d852f82e52" exitCode=0
Feb 19 10:45:02 crc kubenswrapper[4780]: I0219 10:45:02.243524 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c" event={"ID":"3572fc29-9861-4ca3-b1a1-7440b1f4b43b","Type":"ContainerDied","Data":"3f2dabc03f4b7baef1b7212755b37dbf16f14dfea4ade6712b1e65d852f82e52"}
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.678616 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c"
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.882589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-secret-volume\") pod \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") "
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.882806 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-config-volume\") pod \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") "
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.883183 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbn2q\" (UniqueName: \"kubernetes.io/projected/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-kube-api-access-qbn2q\") pod \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\" (UID: \"3572fc29-9861-4ca3-b1a1-7440b1f4b43b\") "
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.883739 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3572fc29-9861-4ca3-b1a1-7440b1f4b43b" (UID: "3572fc29-9861-4ca3-b1a1-7440b1f4b43b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.884706 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.889253 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-kube-api-access-qbn2q" (OuterVolumeSpecName: "kube-api-access-qbn2q") pod "3572fc29-9861-4ca3-b1a1-7440b1f4b43b" (UID: "3572fc29-9861-4ca3-b1a1-7440b1f4b43b"). InnerVolumeSpecName "kube-api-access-qbn2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.889663 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3572fc29-9861-4ca3-b1a1-7440b1f4b43b" (UID: "3572fc29-9861-4ca3-b1a1-7440b1f4b43b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.987811 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 10:45:03 crc kubenswrapper[4780]: I0219 10:45:03.987864 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbn2q\" (UniqueName: \"kubernetes.io/projected/3572fc29-9861-4ca3-b1a1-7440b1f4b43b-kube-api-access-qbn2q\") on node \"crc\" DevicePath \"\""
Feb 19 10:45:04 crc kubenswrapper[4780]: I0219 10:45:04.281731 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c" event={"ID":"3572fc29-9861-4ca3-b1a1-7440b1f4b43b","Type":"ContainerDied","Data":"03d6cde3e017b2ba378ae7efdc48eb102307256c651288ad7176a01bd6c57341"}
Feb 19 10:45:04 crc kubenswrapper[4780]: I0219 10:45:04.282011 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d6cde3e017b2ba378ae7efdc48eb102307256c651288ad7176a01bd6c57341"
Feb 19 10:45:04 crc kubenswrapper[4780]: I0219 10:45:04.281780 4780 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-lbp2c" Feb 19 10:45:04 crc kubenswrapper[4780]: I0219 10:45:04.768728 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"] Feb 19 10:45:04 crc kubenswrapper[4780]: I0219 10:45:04.785094 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-grkl2"] Feb 19 10:45:05 crc kubenswrapper[4780]: I0219 10:45:05.967185 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a90e65-1710-4860-a8be-c6cbc9423096" path="/var/lib/kubelet/pods/a4a90e65-1710-4860-a8be-c6cbc9423096/volumes" Feb 19 10:45:06 crc kubenswrapper[4780]: I0219 10:45:06.941902 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:45:06 crc kubenswrapper[4780]: E0219 10:45:06.942342 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:45:17 crc kubenswrapper[4780]: I0219 10:45:17.957809 4780 scope.go:117] "RemoveContainer" containerID="96f06b75d9840b18dd2109d1e75629ec9586a7e2ebf246f249aab932a50fc5a8" Feb 19 10:45:17 crc kubenswrapper[4780]: I0219 10:45:17.991933 4780 scope.go:117] "RemoveContainer" containerID="46fec74ff2c618843e01cd047d07d6c9d3bbbd1bb124335990de8a1c4326d74d" Feb 19 10:45:18 crc kubenswrapper[4780]: I0219 10:45:18.051929 4780 scope.go:117] "RemoveContainer" containerID="4101b3e7eef21d6a3b6f52804457890f931791c91656f9d99659197402941cb2" Feb 19 10:45:18 crc kubenswrapper[4780]: I0219 
10:45:18.113388 4780 scope.go:117] "RemoveContainer" containerID="0463b6997fc28ef4b9c10eeba41607da39619283b98115a286eb90706b9764e4" Feb 19 10:45:20 crc kubenswrapper[4780]: I0219 10:45:20.939098 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:45:20 crc kubenswrapper[4780]: E0219 10:45:20.940042 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:45:33 crc kubenswrapper[4780]: I0219 10:45:33.940060 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:45:33 crc kubenswrapper[4780]: E0219 10:45:33.941579 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:45:45 crc kubenswrapper[4780]: I0219 10:45:45.938729 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:45:45 crc kubenswrapper[4780]: E0219 10:45:45.940013 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:45:58 crc kubenswrapper[4780]: I0219 10:45:58.938965 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:45:58 crc kubenswrapper[4780]: E0219 10:45:58.940372 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:46:10 crc kubenswrapper[4780]: I0219 10:46:10.041240 4780 generic.go:334] "Generic (PLEG): container finished" podID="4ec28c41-efa8-4b38-8c39-784760e93e05" containerID="7a56265a18f7ac535506905c7de1514b9fe8a593e28583263326349c5e94ea0e" exitCode=0 Feb 19 10:46:10 crc kubenswrapper[4780]: I0219 10:46:10.041333 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" event={"ID":"4ec28c41-efa8-4b38-8c39-784760e93e05","Type":"ContainerDied","Data":"7a56265a18f7ac535506905c7de1514b9fe8a593e28583263326349c5e94ea0e"} Feb 19 10:46:10 crc kubenswrapper[4780]: I0219 10:46:10.938874 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:46:10 crc kubenswrapper[4780]: E0219 10:46:10.939286 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.609619 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.680582 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-0\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ceph\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681069 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbrnh\" (UniqueName: \"kubernetes.io/projected/4ec28c41-efa8-4b38-8c39-784760e93e05-kube-api-access-nbrnh\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681183 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-1\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681261 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-inventory\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-2\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681478 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-0\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681518 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ssh-key-openstack-cell1\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681564 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-combined-ca-bundle\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681610 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-1\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681669 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-3\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681700 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-0\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.681765 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-1\") pod \"4ec28c41-efa8-4b38-8c39-784760e93e05\" (UID: \"4ec28c41-efa8-4b38-8c39-784760e93e05\") " Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.688860 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec28c41-efa8-4b38-8c39-784760e93e05-kube-api-access-nbrnh" (OuterVolumeSpecName: "kube-api-access-nbrnh") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "kube-api-access-nbrnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.690098 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.693231 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ceph" (OuterVolumeSpecName: "ceph") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.723320 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.725222 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.728002 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.730262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.730913 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.733945 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.734296 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.735689 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.746496 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-inventory" (OuterVolumeSpecName: "inventory") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.749497 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4ec28c41-efa8-4b38-8c39-784760e93e05" (UID: "4ec28c41-efa8-4b38-8c39-784760e93e05"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785075 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785119 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785157 4780 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785169 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbrnh\" (UniqueName: \"kubernetes.io/projected/4ec28c41-efa8-4b38-8c39-784760e93e05-kube-api-access-nbrnh\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785180 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785189 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785199 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" 
Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785209 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785217 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785225 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785236 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785246 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:11 crc kubenswrapper[4780]: I0219 10:46:11.785256 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ec28c41-efa8-4b38-8c39-784760e93e05-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:12 crc kubenswrapper[4780]: I0219 10:46:12.069081 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" 
event={"ID":"4ec28c41-efa8-4b38-8c39-784760e93e05","Type":"ContainerDied","Data":"02306d19e5c6d6953dc02b12f7d32e3c170bf84d466757adf2bfdc7da720932a"} Feb 19 10:46:12 crc kubenswrapper[4780]: I0219 10:46:12.069163 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02306d19e5c6d6953dc02b12f7d32e3c170bf84d466757adf2bfdc7da720932a" Feb 19 10:46:12 crc kubenswrapper[4780]: I0219 10:46:12.069248 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd" Feb 19 10:46:23 crc kubenswrapper[4780]: I0219 10:46:23.938609 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:46:23 crc kubenswrapper[4780]: E0219 10:46:23.939476 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:46:37 crc kubenswrapper[4780]: I0219 10:46:37.950447 4780 scope.go:117] "RemoveContainer" containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:46:38 crc kubenswrapper[4780]: I0219 10:46:38.373309 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"f44ca2e0ab50c4d9ad7d6e6fafd7a9a6aca2d3256798890629eda3da3478bc23"} Feb 19 10:46:52 crc kubenswrapper[4780]: E0219 10:46:52.164043 4780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.103:51972->38.102.83.103:37621: write tcp 
38.102.83.103:51972->38.102.83.103:37621: write: broken pipe Feb 19 10:48:34 crc kubenswrapper[4780]: E0219 10:48:34.627589 4780 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.103:37370->38.102.83.103:37621: read tcp 38.102.83.103:37370->38.102.83.103:37621: read: connection reset by peer Feb 19 10:49:06 crc kubenswrapper[4780]: I0219 10:49:06.335879 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:49:06 crc kubenswrapper[4780]: I0219 10:49:06.336472 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:49:18 crc kubenswrapper[4780]: I0219 10:49:18.502226 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 10:49:18 crc kubenswrapper[4780]: I0219 10:49:18.503350 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="07504bcf-ae99-4ecb-ab72-58864a7b4830" containerName="adoption" containerID="cri-o://e62e1ecd1b1c63142808065018537506c1fe3f300d14c99681ac61d09787506d" gracePeriod=30 Feb 19 10:49:36 crc kubenswrapper[4780]: I0219 10:49:36.336818 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:49:36 crc kubenswrapper[4780]: I0219 10:49:36.337415 4780 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:49:48 crc kubenswrapper[4780]: I0219 10:49:48.743628 4780 generic.go:334] "Generic (PLEG): container finished" podID="07504bcf-ae99-4ecb-ab72-58864a7b4830" containerID="e62e1ecd1b1c63142808065018537506c1fe3f300d14c99681ac61d09787506d" exitCode=137 Feb 19 10:49:48 crc kubenswrapper[4780]: I0219 10:49:48.743704 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"07504bcf-ae99-4ecb-ab72-58864a7b4830","Type":"ContainerDied","Data":"e62e1ecd1b1c63142808065018537506c1fe3f300d14c99681ac61d09787506d"} Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.339913 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.487257 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzj27\" (UniqueName: \"kubernetes.io/projected/07504bcf-ae99-4ecb-ab72-58864a7b4830-kube-api-access-lzj27\") pod \"07504bcf-ae99-4ecb-ab72-58864a7b4830\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.494019 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") pod \"07504bcf-ae99-4ecb-ab72-58864a7b4830\" (UID: \"07504bcf-ae99-4ecb-ab72-58864a7b4830\") " Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.499318 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07504bcf-ae99-4ecb-ab72-58864a7b4830-kube-api-access-lzj27" (OuterVolumeSpecName: "kube-api-access-lzj27") pod "07504bcf-ae99-4ecb-ab72-58864a7b4830" (UID: "07504bcf-ae99-4ecb-ab72-58864a7b4830"). InnerVolumeSpecName "kube-api-access-lzj27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.523286 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949" (OuterVolumeSpecName: "mariadb-data") pod "07504bcf-ae99-4ecb-ab72-58864a7b4830" (UID: "07504bcf-ae99-4ecb-ab72-58864a7b4830"). InnerVolumeSpecName "pvc-cf15175e-7972-4ede-bffe-767e83fa5949". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.597976 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cf15175e-7972-4ede-bffe-767e83fa5949\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") on node \"crc\" " Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.598048 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzj27\" (UniqueName: \"kubernetes.io/projected/07504bcf-ae99-4ecb-ab72-58864a7b4830-kube-api-access-lzj27\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.641414 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.641680 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cf15175e-7972-4ede-bffe-767e83fa5949" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949") on node "crc" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.700397 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-cf15175e-7972-4ede-bffe-767e83fa5949\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf15175e-7972-4ede-bffe-767e83fa5949\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.768606 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"07504bcf-ae99-4ecb-ab72-58864a7b4830","Type":"ContainerDied","Data":"aa8174d48533ce8efcd074f11c460b92b6f1eb5735ee579d987e20a5acfc47d2"} Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.768671 4780 scope.go:117] "RemoveContainer" containerID="e62e1ecd1b1c63142808065018537506c1fe3f300d14c99681ac61d09787506d" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.768838 
4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.834830 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.850638 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 10:49:49 crc kubenswrapper[4780]: I0219 10:49:49.959979 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07504bcf-ae99-4ecb-ab72-58864a7b4830" path="/var/lib/kubelet/pods/07504bcf-ae99-4ecb-ab72-58864a7b4830/volumes" Feb 19 10:49:50 crc kubenswrapper[4780]: I0219 10:49:50.351404 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 10:49:50 crc kubenswrapper[4780]: I0219 10:49:50.352444 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="beabdc62-9466-4b98-8f24-a68fefea15ee" containerName="adoption" containerID="cri-o://2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14" gracePeriod=30 Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.336432 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.336999 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.337065 4780 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.338320 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f44ca2e0ab50c4d9ad7d6e6fafd7a9a6aca2d3256798890629eda3da3478bc23"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.338392 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://f44ca2e0ab50c4d9ad7d6e6fafd7a9a6aca2d3256798890629eda3da3478bc23" gracePeriod=600 Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.987000 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="f44ca2e0ab50c4d9ad7d6e6fafd7a9a6aca2d3256798890629eda3da3478bc23" exitCode=0 Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.987100 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"f44ca2e0ab50c4d9ad7d6e6fafd7a9a6aca2d3256798890629eda3da3478bc23"} Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.987369 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c"} Feb 19 10:50:06 crc kubenswrapper[4780]: I0219 10:50:06.987402 4780 scope.go:117] "RemoveContainer" 
containerID="eb68a02b7881dfcd39596f2f7e4b111e36bdfd213f60a2e6f41d277039775d8e" Feb 19 10:50:20 crc kubenswrapper[4780]: I0219 10:50:20.959787 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.078249 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") pod \"beabdc62-9466-4b98-8f24-a68fefea15ee\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.078473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4h2\" (UniqueName: \"kubernetes.io/projected/beabdc62-9466-4b98-8f24-a68fefea15ee-kube-api-access-sg4h2\") pod \"beabdc62-9466-4b98-8f24-a68fefea15ee\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.078553 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/beabdc62-9466-4b98-8f24-a68fefea15ee-ovn-data-cert\") pod \"beabdc62-9466-4b98-8f24-a68fefea15ee\" (UID: \"beabdc62-9466-4b98-8f24-a68fefea15ee\") " Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.089377 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beabdc62-9466-4b98-8f24-a68fefea15ee-kube-api-access-sg4h2" (OuterVolumeSpecName: "kube-api-access-sg4h2") pod "beabdc62-9466-4b98-8f24-a68fefea15ee" (UID: "beabdc62-9466-4b98-8f24-a68fefea15ee"). InnerVolumeSpecName "kube-api-access-sg4h2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.094846 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beabdc62-9466-4b98-8f24-a68fefea15ee-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "beabdc62-9466-4b98-8f24-a68fefea15ee" (UID: "beabdc62-9466-4b98-8f24-a68fefea15ee"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.184308 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/beabdc62-9466-4b98-8f24-a68fefea15ee-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.184343 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg4h2\" (UniqueName: \"kubernetes.io/projected/beabdc62-9466-4b98-8f24-a68fefea15ee-kube-api-access-sg4h2\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.208009 4780 generic.go:334] "Generic (PLEG): container finished" podID="beabdc62-9466-4b98-8f24-a68fefea15ee" containerID="2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14" exitCode=137 Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.208062 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"beabdc62-9466-4b98-8f24-a68fefea15ee","Type":"ContainerDied","Data":"2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14"} Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.208099 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"beabdc62-9466-4b98-8f24-a68fefea15ee","Type":"ContainerDied","Data":"114c851f0e56e6a3a58cf165a809856d194ce4b4322a807a6dabe63c0c252278"} Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.208120 4780 scope.go:117] "RemoveContainer" 
containerID="2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.208418 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.234662 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261" (OuterVolumeSpecName: "ovn-data") pod "beabdc62-9466-4b98-8f24-a68fefea15ee" (UID: "beabdc62-9466-4b98-8f24-a68fefea15ee"). InnerVolumeSpecName "pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.244544 4780 scope.go:117] "RemoveContainer" containerID="2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14" Feb 19 10:50:21 crc kubenswrapper[4780]: E0219 10:50:21.245244 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14\": container with ID starting with 2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14 not found: ID does not exist" containerID="2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.245293 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14"} err="failed to get container status \"2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14\": rpc error: code = NotFound desc = could not find container \"2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14\": container with ID starting with 2cda7b84657375e30e91e2c87008c525bc22a90222fb21175806fad47970ba14 not found: ID does not exist" Feb 19 10:50:21 crc 
kubenswrapper[4780]: I0219 10:50:21.286607 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") on node \"crc\" " Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.342934 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.343445 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261") on node "crc" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.389369 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84cb9170-4a92-4108-9c79-4d5da7cd4261\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.551256 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.563856 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 10:50:21 crc kubenswrapper[4780]: I0219 10:50:21.955217 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beabdc62-9466-4b98-8f24-a68fefea15ee" path="/var/lib/kubelet/pods/beabdc62-9466-4b98-8f24-a68fefea15ee/volumes" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.815914 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q69zh"] Feb 19 10:50:34 crc kubenswrapper[4780]: E0219 10:50:34.817834 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07504bcf-ae99-4ecb-ab72-58864a7b4830" 
containerName="adoption" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.817856 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="07504bcf-ae99-4ecb-ab72-58864a7b4830" containerName="adoption" Feb 19 10:50:34 crc kubenswrapper[4780]: E0219 10:50:34.817924 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3572fc29-9861-4ca3-b1a1-7440b1f4b43b" containerName="collect-profiles" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.817934 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3572fc29-9861-4ca3-b1a1-7440b1f4b43b" containerName="collect-profiles" Feb 19 10:50:34 crc kubenswrapper[4780]: E0219 10:50:34.817945 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec28c41-efa8-4b38-8c39-784760e93e05" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.817956 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec28c41-efa8-4b38-8c39-784760e93e05" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 10:50:34 crc kubenswrapper[4780]: E0219 10:50:34.817979 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beabdc62-9466-4b98-8f24-a68fefea15ee" containerName="adoption" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.817986 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="beabdc62-9466-4b98-8f24-a68fefea15ee" containerName="adoption" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.818293 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="07504bcf-ae99-4ecb-ab72-58864a7b4830" containerName="adoption" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.818332 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3572fc29-9861-4ca3-b1a1-7440b1f4b43b" containerName="collect-profiles" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.818350 4780 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="beabdc62-9466-4b98-8f24-a68fefea15ee" containerName="adoption" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.818370 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec28c41-efa8-4b38-8c39-784760e93e05" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.820540 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.828862 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q69zh"] Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.953973 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-utilities\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.954536 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-catalog-content\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:34 crc kubenswrapper[4780]: I0219 10:50:34.954763 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5sd\" (UniqueName: \"kubernetes.io/projected/ffa1ecbf-1388-466f-8510-367441c99f15-kube-api-access-kt5sd\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 
10:50:35.004486 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lpbhk"] Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.008779 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.055299 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lpbhk"] Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.057550 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-utilities\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.057745 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-catalog-content\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.057854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5sd\" (UniqueName: \"kubernetes.io/projected/ffa1ecbf-1388-466f-8510-367441c99f15-kube-api-access-kt5sd\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.058531 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-utilities\") pod \"certified-operators-q69zh\" (UID: 
\"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.058732 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-catalog-content\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.086689 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5sd\" (UniqueName: \"kubernetes.io/projected/ffa1ecbf-1388-466f-8510-367441c99f15-kube-api-access-kt5sd\") pod \"certified-operators-q69zh\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.152078 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.163497 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-catalog-content\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.163629 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzdx\" (UniqueName: \"kubernetes.io/projected/d3eb2a73-6160-4f02-b843-acbf6699515c-kube-api-access-rlzdx\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.163726 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-utilities\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.266052 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-catalog-content\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.266449 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlzdx\" (UniqueName: \"kubernetes.io/projected/d3eb2a73-6160-4f02-b843-acbf6699515c-kube-api-access-rlzdx\") pod 
\"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.266576 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-utilities\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.266760 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-catalog-content\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.267204 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-utilities\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.292775 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlzdx\" (UniqueName: \"kubernetes.io/projected/d3eb2a73-6160-4f02-b843-acbf6699515c-kube-api-access-rlzdx\") pod \"community-operators-lpbhk\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:35 crc kubenswrapper[4780]: I0219 10:50:35.342509 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.040861 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q69zh"] Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.238825 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lpbhk"] Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.434653 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerStarted","Data":"7f4f7b6804f124909d36821b59a9ef1ab3e9ad4e9bcd4fc1d005014ce4c35294"} Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.437732 4780 generic.go:334] "Generic (PLEG): container finished" podID="ffa1ecbf-1388-466f-8510-367441c99f15" containerID="f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001" exitCode=0 Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.437787 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerDied","Data":"f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001"} Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.437990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerStarted","Data":"56ec10724fe59bf0d5fd5bc766ee96fb65f02cb029a6d11cf9c7b515acf1f0d1"} Feb 19 10:50:36 crc kubenswrapper[4780]: I0219 10:50:36.443708 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:50:37 crc kubenswrapper[4780]: I0219 10:50:37.450872 4780 generic.go:334] "Generic (PLEG): container finished" podID="d3eb2a73-6160-4f02-b843-acbf6699515c" 
containerID="479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6" exitCode=0 Feb 19 10:50:37 crc kubenswrapper[4780]: I0219 10:50:37.450941 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerDied","Data":"479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6"} Feb 19 10:50:37 crc kubenswrapper[4780]: I0219 10:50:37.457537 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerStarted","Data":"a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b"} Feb 19 10:50:38 crc kubenswrapper[4780]: I0219 10:50:38.492524 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerStarted","Data":"8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc"} Feb 19 10:50:38 crc kubenswrapper[4780]: I0219 10:50:38.505428 4780 generic.go:334] "Generic (PLEG): container finished" podID="ffa1ecbf-1388-466f-8510-367441c99f15" containerID="a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b" exitCode=0 Feb 19 10:50:38 crc kubenswrapper[4780]: I0219 10:50:38.505615 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerDied","Data":"a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b"} Feb 19 10:50:39 crc kubenswrapper[4780]: I0219 10:50:39.519178 4780 generic.go:334] "Generic (PLEG): container finished" podID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerID="8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc" exitCode=0 Feb 19 10:50:39 crc kubenswrapper[4780]: I0219 10:50:39.520368 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerDied","Data":"8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc"} Feb 19 10:50:39 crc kubenswrapper[4780]: I0219 10:50:39.526730 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerStarted","Data":"febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60"} Feb 19 10:50:40 crc kubenswrapper[4780]: I0219 10:50:40.540815 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerStarted","Data":"16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9"} Feb 19 10:50:40 crc kubenswrapper[4780]: I0219 10:50:40.582284 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q69zh" podStartSLOduration=4.03995069 podStartE2EDuration="6.582255861s" podCreationTimestamp="2026-02-19 10:50:34 +0000 UTC" firstStartedPulling="2026-02-19 10:50:36.443373743 +0000 UTC m=+8979.187031192" lastFinishedPulling="2026-02-19 10:50:38.985678914 +0000 UTC m=+8981.729336363" observedRunningTime="2026-02-19 10:50:39.576532862 +0000 UTC m=+8982.320190311" watchObservedRunningTime="2026-02-19 10:50:40.582255861 +0000 UTC m=+8983.325913300" Feb 19 10:50:40 crc kubenswrapper[4780]: I0219 10:50:40.584703 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lpbhk" podStartSLOduration=4.116375457 podStartE2EDuration="6.584691502s" podCreationTimestamp="2026-02-19 10:50:34 +0000 UTC" firstStartedPulling="2026-02-19 10:50:37.453291597 +0000 UTC m=+8980.196949046" lastFinishedPulling="2026-02-19 10:50:39.921607642 +0000 UTC m=+8982.665265091" 
observedRunningTime="2026-02-19 10:50:40.567110711 +0000 UTC m=+8983.310768160" watchObservedRunningTime="2026-02-19 10:50:40.584691502 +0000 UTC m=+8983.328348951" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.152549 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.153205 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.208743 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.343410 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.343491 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.399663 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.646942 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:45 crc kubenswrapper[4780]: I0219 10:50:45.651540 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:47 crc kubenswrapper[4780]: I0219 10:50:47.062411 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q69zh"] Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.065239 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lpbhk"] Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.065820 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lpbhk" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="registry-server" containerID="cri-o://16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9" gracePeriod=2 Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.583630 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.639641 4780 generic.go:334] "Generic (PLEG): container finished" podID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerID="16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9" exitCode=0 Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.639716 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lpbhk" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.639723 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerDied","Data":"16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9"} Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.639789 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpbhk" event={"ID":"d3eb2a73-6160-4f02-b843-acbf6699515c","Type":"ContainerDied","Data":"7f4f7b6804f124909d36821b59a9ef1ab3e9ad4e9bcd4fc1d005014ce4c35294"} Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.639808 4780 scope.go:117] "RemoveContainer" containerID="16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.640258 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q69zh" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="registry-server" containerID="cri-o://febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60" gracePeriod=2 Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.661251 4780 scope.go:117] "RemoveContainer" containerID="8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.683497 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-utilities\") pod \"d3eb2a73-6160-4f02-b843-acbf6699515c\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.683579 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlzdx\" (UniqueName: 
\"kubernetes.io/projected/d3eb2a73-6160-4f02-b843-acbf6699515c-kube-api-access-rlzdx\") pod \"d3eb2a73-6160-4f02-b843-acbf6699515c\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.683947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-catalog-content\") pod \"d3eb2a73-6160-4f02-b843-acbf6699515c\" (UID: \"d3eb2a73-6160-4f02-b843-acbf6699515c\") " Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.690039 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-utilities" (OuterVolumeSpecName: "utilities") pod "d3eb2a73-6160-4f02-b843-acbf6699515c" (UID: "d3eb2a73-6160-4f02-b843-acbf6699515c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.694731 4780 scope.go:117] "RemoveContainer" containerID="479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.695967 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3eb2a73-6160-4f02-b843-acbf6699515c-kube-api-access-rlzdx" (OuterVolumeSpecName: "kube-api-access-rlzdx") pod "d3eb2a73-6160-4f02-b843-acbf6699515c" (UID: "d3eb2a73-6160-4f02-b843-acbf6699515c"). InnerVolumeSpecName "kube-api-access-rlzdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.749039 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3eb2a73-6160-4f02-b843-acbf6699515c" (UID: "d3eb2a73-6160-4f02-b843-acbf6699515c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.787560 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.787606 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlzdx\" (UniqueName: \"kubernetes.io/projected/d3eb2a73-6160-4f02-b843-acbf6699515c-kube-api-access-rlzdx\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.787622 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3eb2a73-6160-4f02-b843-acbf6699515c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.862519 4780 scope.go:117] "RemoveContainer" containerID="16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9" Feb 19 10:50:48 crc kubenswrapper[4780]: E0219 10:50:48.863448 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9\": container with ID starting with 16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9 not found: ID does not exist" containerID="16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.863516 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9"} err="failed to get container status \"16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9\": rpc error: code = NotFound desc = could not find container \"16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9\": container with ID 
starting with 16a9389e65a2b8571896a22c8d4053e93dac81fc74fcb24188b751bb29069fe9 not found: ID does not exist" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.863564 4780 scope.go:117] "RemoveContainer" containerID="8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc" Feb 19 10:50:48 crc kubenswrapper[4780]: E0219 10:50:48.863963 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc\": container with ID starting with 8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc not found: ID does not exist" containerID="8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.864011 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc"} err="failed to get container status \"8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc\": rpc error: code = NotFound desc = could not find container \"8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc\": container with ID starting with 8bd4ca4fd9e2ca0db0154a6b1dfc6c7c9f1542de1aa57ca894693c4c9806fafc not found: ID does not exist" Feb 19 10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.864038 4780 scope.go:117] "RemoveContainer" containerID="479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6" Feb 19 10:50:48 crc kubenswrapper[4780]: E0219 10:50:48.864357 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6\": container with ID starting with 479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6 not found: ID does not exist" containerID="479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6" Feb 19 
10:50:48 crc kubenswrapper[4780]: I0219 10:50:48.864396 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6"} err="failed to get container status \"479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6\": rpc error: code = NotFound desc = could not find container \"479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6\": container with ID starting with 479d1219c3dd0593012fd3ead47225f3c08db81e6faed662e4327d5688b848e6 not found: ID does not exist" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.006011 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lpbhk"] Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.021447 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lpbhk"] Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.057249 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.201269 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5sd\" (UniqueName: \"kubernetes.io/projected/ffa1ecbf-1388-466f-8510-367441c99f15-kube-api-access-kt5sd\") pod \"ffa1ecbf-1388-466f-8510-367441c99f15\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.201611 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-catalog-content\") pod \"ffa1ecbf-1388-466f-8510-367441c99f15\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.201777 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-utilities\") pod \"ffa1ecbf-1388-466f-8510-367441c99f15\" (UID: \"ffa1ecbf-1388-466f-8510-367441c99f15\") " Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.202844 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-utilities" (OuterVolumeSpecName: "utilities") pod "ffa1ecbf-1388-466f-8510-367441c99f15" (UID: "ffa1ecbf-1388-466f-8510-367441c99f15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.208480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa1ecbf-1388-466f-8510-367441c99f15-kube-api-access-kt5sd" (OuterVolumeSpecName: "kube-api-access-kt5sd") pod "ffa1ecbf-1388-466f-8510-367441c99f15" (UID: "ffa1ecbf-1388-466f-8510-367441c99f15"). InnerVolumeSpecName "kube-api-access-kt5sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.307547 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.307603 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5sd\" (UniqueName: \"kubernetes.io/projected/ffa1ecbf-1388-466f-8510-367441c99f15-kube-api-access-kt5sd\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.313880 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffa1ecbf-1388-466f-8510-367441c99f15" (UID: "ffa1ecbf-1388-466f-8510-367441c99f15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.410021 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ecbf-1388-466f-8510-367441c99f15-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.655887 4780 generic.go:334] "Generic (PLEG): container finished" podID="ffa1ecbf-1388-466f-8510-367441c99f15" containerID="febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60" exitCode=0 Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.655939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerDied","Data":"febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60"} Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.655984 4780 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-q69zh" event={"ID":"ffa1ecbf-1388-466f-8510-367441c99f15","Type":"ContainerDied","Data":"56ec10724fe59bf0d5fd5bc766ee96fb65f02cb029a6d11cf9c7b515acf1f0d1"} Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.655985 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q69zh" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.656005 4780 scope.go:117] "RemoveContainer" containerID="febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.689359 4780 scope.go:117] "RemoveContainer" containerID="a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.722310 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q69zh"] Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.730630 4780 scope.go:117] "RemoveContainer" containerID="f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.734619 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q69zh"] Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.771723 4780 scope.go:117] "RemoveContainer" containerID="febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60" Feb 19 10:50:49 crc kubenswrapper[4780]: E0219 10:50:49.772537 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60\": container with ID starting with febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60 not found: ID does not exist" containerID="febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 
10:50:49.772592 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60"} err="failed to get container status \"febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60\": rpc error: code = NotFound desc = could not find container \"febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60\": container with ID starting with febca12e5372078f59b114b74065b4330821d7f3127760a5f1febe447f4a8d60 not found: ID does not exist" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.772627 4780 scope.go:117] "RemoveContainer" containerID="a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b" Feb 19 10:50:49 crc kubenswrapper[4780]: E0219 10:50:49.773163 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b\": container with ID starting with a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b not found: ID does not exist" containerID="a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.773186 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b"} err="failed to get container status \"a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b\": rpc error: code = NotFound desc = could not find container \"a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b\": container with ID starting with a226cb78f3655311b91057e8cac022bbcdaa36f02d889e26b089d4d93fb3243b not found: ID does not exist" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.773201 4780 scope.go:117] "RemoveContainer" containerID="f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001" Feb 19 10:50:49 crc 
kubenswrapper[4780]: E0219 10:50:49.773423 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001\": container with ID starting with f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001 not found: ID does not exist" containerID="f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.773440 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001"} err="failed to get container status \"f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001\": rpc error: code = NotFound desc = could not find container \"f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001\": container with ID starting with f908fb1fadb9a298542daf721390f2ff069eb9c1bb68c2d70ff8e8a6e43ca001 not found: ID does not exist" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.954771 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" path="/var/lib/kubelet/pods/d3eb2a73-6160-4f02-b843-acbf6699515c/volumes" Feb 19 10:50:49 crc kubenswrapper[4780]: I0219 10:50:49.955833 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" path="/var/lib/kubelet/pods/ffa1ecbf-1388-466f-8510-367441c99f15/volumes" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.253225 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bprf7/must-gather-s9cd6"] Feb 19 10:51:25 crc kubenswrapper[4780]: E0219 10:51:25.254169 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="extract-content" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254184 4780 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="extract-content" Feb 19 10:51:25 crc kubenswrapper[4780]: E0219 10:51:25.254211 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="extract-content" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254217 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="extract-content" Feb 19 10:51:25 crc kubenswrapper[4780]: E0219 10:51:25.254239 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="registry-server" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254246 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="registry-server" Feb 19 10:51:25 crc kubenswrapper[4780]: E0219 10:51:25.254261 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="registry-server" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254268 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="registry-server" Feb 19 10:51:25 crc kubenswrapper[4780]: E0219 10:51:25.254283 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="extract-utilities" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254290 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="extract-utilities" Feb 19 10:51:25 crc kubenswrapper[4780]: E0219 10:51:25.254310 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="extract-utilities" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254317 4780 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="extract-utilities" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254511 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa1ecbf-1388-466f-8510-367441c99f15" containerName="registry-server" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.254532 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3eb2a73-6160-4f02-b843-acbf6699515c" containerName="registry-server" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.255875 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.260911 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bprf7"/"openshift-service-ca.crt" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.262053 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bprf7"/"default-dockercfg-6rtrr" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.262410 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bprf7"/"kube-root-ca.crt" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.277808 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bprf7/must-gather-s9cd6"] Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.328054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5e9beab-82b3-4cde-af54-4e6db119bf2c-must-gather-output\") pod \"must-gather-s9cd6\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") " pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.328529 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2t4kf\" (UniqueName: \"kubernetes.io/projected/a5e9beab-82b3-4cde-af54-4e6db119bf2c-kube-api-access-2t4kf\") pod \"must-gather-s9cd6\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") " pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.442784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5e9beab-82b3-4cde-af54-4e6db119bf2c-must-gather-output\") pod \"must-gather-s9cd6\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") " pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.442989 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4kf\" (UniqueName: \"kubernetes.io/projected/a5e9beab-82b3-4cde-af54-4e6db119bf2c-kube-api-access-2t4kf\") pod \"must-gather-s9cd6\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") " pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.443252 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5e9beab-82b3-4cde-af54-4e6db119bf2c-must-gather-output\") pod \"must-gather-s9cd6\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") " pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.470489 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4kf\" (UniqueName: \"kubernetes.io/projected/a5e9beab-82b3-4cde-af54-4e6db119bf2c-kube-api-access-2t4kf\") pod \"must-gather-s9cd6\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") " pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:25 crc kubenswrapper[4780]: I0219 10:51:25.580842 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 10:51:26 crc kubenswrapper[4780]: I0219 10:51:26.114595 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bprf7/must-gather-s9cd6"] Feb 19 10:51:27 crc kubenswrapper[4780]: I0219 10:51:27.089551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/must-gather-s9cd6" event={"ID":"a5e9beab-82b3-4cde-af54-4e6db119bf2c","Type":"ContainerStarted","Data":"61fd3d95333441b60fbe174473f0d613855fa571d095159a0f58565e2798013d"} Feb 19 10:51:35 crc kubenswrapper[4780]: I0219 10:51:35.186098 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/must-gather-s9cd6" event={"ID":"a5e9beab-82b3-4cde-af54-4e6db119bf2c","Type":"ContainerStarted","Data":"ea82b93a172f70eb31928fe8a9c77599f69cabff3ef3a249a60ae0c0321046bd"} Feb 19 10:51:35 crc kubenswrapper[4780]: I0219 10:51:35.186731 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/must-gather-s9cd6" event={"ID":"a5e9beab-82b3-4cde-af54-4e6db119bf2c","Type":"ContainerStarted","Data":"4f1a22a576bcaa5a70f07368bf1dcd1930a11a79b28ac86d10ab484fd3898819"} Feb 19 10:51:35 crc kubenswrapper[4780]: I0219 10:51:35.204853 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bprf7/must-gather-s9cd6" podStartSLOduration=1.906848712 podStartE2EDuration="10.204816407s" podCreationTimestamp="2026-02-19 10:51:25 +0000 UTC" firstStartedPulling="2026-02-19 10:51:26.123201126 +0000 UTC m=+9028.866858575" lastFinishedPulling="2026-02-19 10:51:34.421168821 +0000 UTC m=+9037.164826270" observedRunningTime="2026-02-19 10:51:35.203539745 +0000 UTC m=+9037.947197194" watchObservedRunningTime="2026-02-19 10:51:35.204816407 +0000 UTC m=+9037.948473856" Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.706391 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-bprf7/crc-debug-cmz7s"] Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.709171 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.885608 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db960b85-24b7-432f-ae37-2f1a77afa3ef-host\") pod \"crc-debug-cmz7s\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.886262 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bgdd\" (UniqueName: \"kubernetes.io/projected/db960b85-24b7-432f-ae37-2f1a77afa3ef-kube-api-access-2bgdd\") pod \"crc-debug-cmz7s\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.988940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bgdd\" (UniqueName: \"kubernetes.io/projected/db960b85-24b7-432f-ae37-2f1a77afa3ef-kube-api-access-2bgdd\") pod \"crc-debug-cmz7s\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.989059 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db960b85-24b7-432f-ae37-2f1a77afa3ef-host\") pod \"crc-debug-cmz7s\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:39 crc kubenswrapper[4780]: I0219 10:51:39.989267 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/db960b85-24b7-432f-ae37-2f1a77afa3ef-host\") pod \"crc-debug-cmz7s\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:40 crc kubenswrapper[4780]: I0219 10:51:40.021239 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bgdd\" (UniqueName: \"kubernetes.io/projected/db960b85-24b7-432f-ae37-2f1a77afa3ef-kube-api-access-2bgdd\") pod \"crc-debug-cmz7s\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:40 crc kubenswrapper[4780]: I0219 10:51:40.032840 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:51:40 crc kubenswrapper[4780]: I0219 10:51:40.237877 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" event={"ID":"db960b85-24b7-432f-ae37-2f1a77afa3ef","Type":"ContainerStarted","Data":"133e5c6510b0626daa987667c342ab647bf012d2ce73f15f96508d542d6cf9a0"} Feb 19 10:51:55 crc kubenswrapper[4780]: I0219 10:51:55.405817 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" event={"ID":"db960b85-24b7-432f-ae37-2f1a77afa3ef","Type":"ContainerStarted","Data":"639f62ec11c606e3a260facba65c1821ccd953b323c31c300713e795137d4657"} Feb 19 10:51:55 crc kubenswrapper[4780]: I0219 10:51:55.424590 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" podStartSLOduration=2.182593695 podStartE2EDuration="16.424571333s" podCreationTimestamp="2026-02-19 10:51:39 +0000 UTC" firstStartedPulling="2026-02-19 10:51:40.078052306 +0000 UTC m=+9042.821709755" lastFinishedPulling="2026-02-19 10:51:54.320029944 +0000 UTC m=+9057.063687393" observedRunningTime="2026-02-19 10:51:55.4216632 +0000 UTC m=+9058.165320649" watchObservedRunningTime="2026-02-19 
10:51:55.424571333 +0000 UTC m=+9058.168228782" Feb 19 10:52:06 crc kubenswrapper[4780]: I0219 10:52:06.336299 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:52:06 crc kubenswrapper[4780]: I0219 10:52:06.338540 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:52:23 crc kubenswrapper[4780]: I0219 10:52:23.723763 4780 generic.go:334] "Generic (PLEG): container finished" podID="db960b85-24b7-432f-ae37-2f1a77afa3ef" containerID="639f62ec11c606e3a260facba65c1821ccd953b323c31c300713e795137d4657" exitCode=0 Feb 19 10:52:23 crc kubenswrapper[4780]: I0219 10:52:23.723866 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" event={"ID":"db960b85-24b7-432f-ae37-2f1a77afa3ef","Type":"ContainerDied","Data":"639f62ec11c606e3a260facba65c1821ccd953b323c31c300713e795137d4657"} Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.866890 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.934765 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bprf7/crc-debug-cmz7s"] Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.936695 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db960b85-24b7-432f-ae37-2f1a77afa3ef-host\") pod \"db960b85-24b7-432f-ae37-2f1a77afa3ef\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.936915 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bgdd\" (UniqueName: \"kubernetes.io/projected/db960b85-24b7-432f-ae37-2f1a77afa3ef-kube-api-access-2bgdd\") pod \"db960b85-24b7-432f-ae37-2f1a77afa3ef\" (UID: \"db960b85-24b7-432f-ae37-2f1a77afa3ef\") " Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.938149 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db960b85-24b7-432f-ae37-2f1a77afa3ef-host" (OuterVolumeSpecName: "host") pod "db960b85-24b7-432f-ae37-2f1a77afa3ef" (UID: "db960b85-24b7-432f-ae37-2f1a77afa3ef"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.942424 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bprf7/crc-debug-cmz7s"] Feb 19 10:52:24 crc kubenswrapper[4780]: I0219 10:52:24.965968 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db960b85-24b7-432f-ae37-2f1a77afa3ef-kube-api-access-2bgdd" (OuterVolumeSpecName: "kube-api-access-2bgdd") pod "db960b85-24b7-432f-ae37-2f1a77afa3ef" (UID: "db960b85-24b7-432f-ae37-2f1a77afa3ef"). InnerVolumeSpecName "kube-api-access-2bgdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:52:25 crc kubenswrapper[4780]: I0219 10:52:25.038760 4780 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db960b85-24b7-432f-ae37-2f1a77afa3ef-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:25 crc kubenswrapper[4780]: I0219 10:52:25.038800 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bgdd\" (UniqueName: \"kubernetes.io/projected/db960b85-24b7-432f-ae37-2f1a77afa3ef-kube-api-access-2bgdd\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:25 crc kubenswrapper[4780]: I0219 10:52:25.746154 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133e5c6510b0626daa987667c342ab647bf012d2ce73f15f96508d542d6cf9a0" Feb 19 10:52:25 crc kubenswrapper[4780]: I0219 10:52:25.746288 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-cmz7s" Feb 19 10:52:25 crc kubenswrapper[4780]: I0219 10:52:25.957066 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db960b85-24b7-432f-ae37-2f1a77afa3ef" path="/var/lib/kubelet/pods/db960b85-24b7-432f-ae37-2f1a77afa3ef/volumes" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.152308 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bprf7/crc-debug-t66nq"] Feb 19 10:52:26 crc kubenswrapper[4780]: E0219 10:52:26.153016 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db960b85-24b7-432f-ae37-2f1a77afa3ef" containerName="container-00" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.153041 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="db960b85-24b7-432f-ae37-2f1a77afa3ef" containerName="container-00" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.153336 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="db960b85-24b7-432f-ae37-2f1a77afa3ef" 
containerName="container-00" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.154489 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.269281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88s4b\" (UniqueName: \"kubernetes.io/projected/141d6251-e3e5-4099-b34e-97787af389ed-kube-api-access-88s4b\") pod \"crc-debug-t66nq\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.269791 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/141d6251-e3e5-4099-b34e-97787af389ed-host\") pod \"crc-debug-t66nq\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.372738 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88s4b\" (UniqueName: \"kubernetes.io/projected/141d6251-e3e5-4099-b34e-97787af389ed-kube-api-access-88s4b\") pod \"crc-debug-t66nq\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.372934 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/141d6251-e3e5-4099-b34e-97787af389ed-host\") pod \"crc-debug-t66nq\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.373158 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/141d6251-e3e5-4099-b34e-97787af389ed-host\") 
pod \"crc-debug-t66nq\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.403549 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88s4b\" (UniqueName: \"kubernetes.io/projected/141d6251-e3e5-4099-b34e-97787af389ed-kube-api-access-88s4b\") pod \"crc-debug-t66nq\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.477799 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:26 crc kubenswrapper[4780]: I0219 10:52:26.762097 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-t66nq" event={"ID":"141d6251-e3e5-4099-b34e-97787af389ed","Type":"ContainerStarted","Data":"2f11b8e09b2c304aa3379a333b5c8814a44297dc0e536f8dd45be26b90048341"} Feb 19 10:52:27 crc kubenswrapper[4780]: I0219 10:52:27.776530 4780 generic.go:334] "Generic (PLEG): container finished" podID="141d6251-e3e5-4099-b34e-97787af389ed" containerID="3d19e44eae34b58d3a1c8821cf1d93d2521bd0252828af655405924335eff490" exitCode=0 Feb 19 10:52:27 crc kubenswrapper[4780]: I0219 10:52:27.776635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-t66nq" event={"ID":"141d6251-e3e5-4099-b34e-97787af389ed","Type":"ContainerDied","Data":"3d19e44eae34b58d3a1c8821cf1d93d2521bd0252828af655405924335eff490"} Feb 19 10:52:27 crc kubenswrapper[4780]: I0219 10:52:27.959544 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bprf7/crc-debug-t66nq"] Feb 19 10:52:27 crc kubenswrapper[4780]: I0219 10:52:27.974808 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bprf7/crc-debug-t66nq"] Feb 19 10:52:28 crc kubenswrapper[4780]: I0219 10:52:28.907450 4780 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.040022 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/141d6251-e3e5-4099-b34e-97787af389ed-host\") pod \"141d6251-e3e5-4099-b34e-97787af389ed\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.040153 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88s4b\" (UniqueName: \"kubernetes.io/projected/141d6251-e3e5-4099-b34e-97787af389ed-kube-api-access-88s4b\") pod \"141d6251-e3e5-4099-b34e-97787af389ed\" (UID: \"141d6251-e3e5-4099-b34e-97787af389ed\") " Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.045071 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/141d6251-e3e5-4099-b34e-97787af389ed-host" (OuterVolumeSpecName: "host") pod "141d6251-e3e5-4099-b34e-97787af389ed" (UID: "141d6251-e3e5-4099-b34e-97787af389ed"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.048224 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141d6251-e3e5-4099-b34e-97787af389ed-kube-api-access-88s4b" (OuterVolumeSpecName: "kube-api-access-88s4b") pod "141d6251-e3e5-4099-b34e-97787af389ed" (UID: "141d6251-e3e5-4099-b34e-97787af389ed"). InnerVolumeSpecName "kube-api-access-88s4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.144943 4780 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/141d6251-e3e5-4099-b34e-97787af389ed-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.146181 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88s4b\" (UniqueName: \"kubernetes.io/projected/141d6251-e3e5-4099-b34e-97787af389ed-kube-api-access-88s4b\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.291651 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bprf7/crc-debug-jfdp9"] Feb 19 10:52:29 crc kubenswrapper[4780]: E0219 10:52:29.292720 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141d6251-e3e5-4099-b34e-97787af389ed" containerName="container-00" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.292747 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="141d6251-e3e5-4099-b34e-97787af389ed" containerName="container-00" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.293162 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="141d6251-e3e5-4099-b34e-97787af389ed" containerName="container-00" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.294972 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.352373 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26lt7\" (UniqueName: \"kubernetes.io/projected/78020f93-a0a2-4fea-be11-dec460f6b64a-kube-api-access-26lt7\") pod \"crc-debug-jfdp9\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.352614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78020f93-a0a2-4fea-be11-dec460f6b64a-host\") pod \"crc-debug-jfdp9\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.456111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26lt7\" (UniqueName: \"kubernetes.io/projected/78020f93-a0a2-4fea-be11-dec460f6b64a-kube-api-access-26lt7\") pod \"crc-debug-jfdp9\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.456273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78020f93-a0a2-4fea-be11-dec460f6b64a-host\") pod \"crc-debug-jfdp9\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.456488 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78020f93-a0a2-4fea-be11-dec460f6b64a-host\") pod \"crc-debug-jfdp9\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc 
kubenswrapper[4780]: I0219 10:52:29.484999 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26lt7\" (UniqueName: \"kubernetes.io/projected/78020f93-a0a2-4fea-be11-dec460f6b64a-kube-api-access-26lt7\") pod \"crc-debug-jfdp9\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.616586 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:29 crc kubenswrapper[4780]: W0219 10:52:29.647673 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78020f93_a0a2_4fea_be11_dec460f6b64a.slice/crio-bbedcb44d4b183aa56fced36f0a1b7d480cea3f59e77a5c5c618f3b77523d823 WatchSource:0}: Error finding container bbedcb44d4b183aa56fced36f0a1b7d480cea3f59e77a5c5c618f3b77523d823: Status 404 returned error can't find the container with id bbedcb44d4b183aa56fced36f0a1b7d480cea3f59e77a5c5c618f3b77523d823 Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.801532 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-jfdp9" event={"ID":"78020f93-a0a2-4fea-be11-dec460f6b64a","Type":"ContainerStarted","Data":"bbedcb44d4b183aa56fced36f0a1b7d480cea3f59e77a5c5c618f3b77523d823"} Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.804282 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f11b8e09b2c304aa3379a333b5c8814a44297dc0e536f8dd45be26b90048341" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.804386 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-t66nq" Feb 19 10:52:29 crc kubenswrapper[4780]: I0219 10:52:29.962863 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141d6251-e3e5-4099-b34e-97787af389ed" path="/var/lib/kubelet/pods/141d6251-e3e5-4099-b34e-97787af389ed/volumes" Feb 19 10:52:30 crc kubenswrapper[4780]: I0219 10:52:30.817885 4780 generic.go:334] "Generic (PLEG): container finished" podID="78020f93-a0a2-4fea-be11-dec460f6b64a" containerID="15abbbbc4e2c71c66fcc7084ea81b3ba9fd9ad43f6c8d49996aa8a87136a0f4f" exitCode=0 Feb 19 10:52:30 crc kubenswrapper[4780]: I0219 10:52:30.817948 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/crc-debug-jfdp9" event={"ID":"78020f93-a0a2-4fea-be11-dec460f6b64a","Type":"ContainerDied","Data":"15abbbbc4e2c71c66fcc7084ea81b3ba9fd9ad43f6c8d49996aa8a87136a0f4f"} Feb 19 10:52:30 crc kubenswrapper[4780]: I0219 10:52:30.862409 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bprf7/crc-debug-jfdp9"] Feb 19 10:52:30 crc kubenswrapper[4780]: I0219 10:52:30.874410 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bprf7/crc-debug-jfdp9"] Feb 19 10:52:31 crc kubenswrapper[4780]: I0219 10:52:31.992854 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.136176 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26lt7\" (UniqueName: \"kubernetes.io/projected/78020f93-a0a2-4fea-be11-dec460f6b64a-kube-api-access-26lt7\") pod \"78020f93-a0a2-4fea-be11-dec460f6b64a\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.136561 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78020f93-a0a2-4fea-be11-dec460f6b64a-host\") pod \"78020f93-a0a2-4fea-be11-dec460f6b64a\" (UID: \"78020f93-a0a2-4fea-be11-dec460f6b64a\") " Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.136675 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78020f93-a0a2-4fea-be11-dec460f6b64a-host" (OuterVolumeSpecName: "host") pod "78020f93-a0a2-4fea-be11-dec460f6b64a" (UID: "78020f93-a0a2-4fea-be11-dec460f6b64a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.137778 4780 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78020f93-a0a2-4fea-be11-dec460f6b64a-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.145522 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78020f93-a0a2-4fea-be11-dec460f6b64a-kube-api-access-26lt7" (OuterVolumeSpecName: "kube-api-access-26lt7") pod "78020f93-a0a2-4fea-be11-dec460f6b64a" (UID: "78020f93-a0a2-4fea-be11-dec460f6b64a"). InnerVolumeSpecName "kube-api-access-26lt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.238758 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26lt7\" (UniqueName: \"kubernetes.io/projected/78020f93-a0a2-4fea-be11-dec460f6b64a-kube-api-access-26lt7\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.846334 4780 scope.go:117] "RemoveContainer" containerID="15abbbbc4e2c71c66fcc7084ea81b3ba9fd9ad43f6c8d49996aa8a87136a0f4f" Feb 19 10:52:32 crc kubenswrapper[4780]: I0219 10:52:32.846720 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/crc-debug-jfdp9" Feb 19 10:52:33 crc kubenswrapper[4780]: I0219 10:52:33.950594 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78020f93-a0a2-4fea-be11-dec460f6b64a" path="/var/lib/kubelet/pods/78020f93-a0a2-4fea-be11-dec460f6b64a/volumes" Feb 19 10:52:36 crc kubenswrapper[4780]: I0219 10:52:36.337020 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:52:36 crc kubenswrapper[4780]: I0219 10:52:36.337482 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:53:06 crc kubenswrapper[4780]: I0219 10:53:06.336409 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:53:06 crc kubenswrapper[4780]: I0219 10:53:06.336901 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:53:06 crc kubenswrapper[4780]: I0219 10:53:06.336956 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 10:53:06 crc kubenswrapper[4780]: I0219 10:53:06.338156 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:53:06 crc kubenswrapper[4780]: I0219 10:53:06.338221 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" gracePeriod=600 Feb 19 10:53:06 crc kubenswrapper[4780]: E0219 10:53:06.478405 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:53:07 
crc kubenswrapper[4780]: I0219 10:53:07.250964 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" exitCode=0 Feb 19 10:53:07 crc kubenswrapper[4780]: I0219 10:53:07.251051 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c"} Feb 19 10:53:07 crc kubenswrapper[4780]: I0219 10:53:07.251313 4780 scope.go:117] "RemoveContainer" containerID="f44ca2e0ab50c4d9ad7d6e6fafd7a9a6aca2d3256798890629eda3da3478bc23" Feb 19 10:53:07 crc kubenswrapper[4780]: I0219 10:53:07.252291 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:53:07 crc kubenswrapper[4780]: E0219 10:53:07.252714 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:53:17 crc kubenswrapper[4780]: I0219 10:53:17.946961 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:53:17 crc kubenswrapper[4780]: E0219 10:53:17.948219 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:53:31 crc kubenswrapper[4780]: I0219 10:53:31.938583 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:53:31 crc kubenswrapper[4780]: E0219 10:53:31.941350 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:53:45 crc kubenswrapper[4780]: I0219 10:53:45.939502 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:53:45 crc kubenswrapper[4780]: E0219 10:53:45.940325 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:53:57 crc kubenswrapper[4780]: I0219 10:53:57.948763 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:53:57 crc kubenswrapper[4780]: E0219 10:53:57.949953 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:54:09 crc kubenswrapper[4780]: I0219 10:54:09.938828 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:54:09 crc kubenswrapper[4780]: E0219 10:54:09.940014 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:54:22 crc kubenswrapper[4780]: I0219 10:54:22.938973 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:54:22 crc kubenswrapper[4780]: E0219 10:54:22.940203 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.338888 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9pqdn"] Feb 19 10:54:25 crc kubenswrapper[4780]: E0219 10:54:25.339887 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78020f93-a0a2-4fea-be11-dec460f6b64a" containerName="container-00" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.339903 
4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="78020f93-a0a2-4fea-be11-dec460f6b64a" containerName="container-00" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.340219 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="78020f93-a0a2-4fea-be11-dec460f6b64a" containerName="container-00" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.341923 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.376044 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pqdn"] Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.459059 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-utilities\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.459166 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-catalog-content\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.459230 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfrg\" (UniqueName: \"kubernetes.io/projected/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-kube-api-access-2gfrg\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.562277 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-utilities\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.562403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-catalog-content\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.562495 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfrg\" (UniqueName: \"kubernetes.io/projected/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-kube-api-access-2gfrg\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.563032 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-catalog-content\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.563046 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-utilities\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.602054 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2gfrg\" (UniqueName: \"kubernetes.io/projected/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-kube-api-access-2gfrg\") pod \"redhat-marketplace-9pqdn\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:25 crc kubenswrapper[4780]: I0219 10:54:25.665601 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:26 crc kubenswrapper[4780]: I0219 10:54:26.255292 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pqdn"] Feb 19 10:54:26 crc kubenswrapper[4780]: I0219 10:54:26.295912 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pqdn" event={"ID":"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0","Type":"ContainerStarted","Data":"201b0232fcb185052c25486b914b43eaa8d8e1becbd264fdd0fff8f9fd6aa90e"} Feb 19 10:54:27 crc kubenswrapper[4780]: I0219 10:54:27.311595 4780 generic.go:334] "Generic (PLEG): container finished" podID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerID="72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272" exitCode=0 Feb 19 10:54:27 crc kubenswrapper[4780]: I0219 10:54:27.311719 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pqdn" event={"ID":"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0","Type":"ContainerDied","Data":"72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272"} Feb 19 10:54:29 crc kubenswrapper[4780]: I0219 10:54:29.334672 4780 generic.go:334] "Generic (PLEG): container finished" podID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerID="a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d" exitCode=0 Feb 19 10:54:29 crc kubenswrapper[4780]: I0219 10:54:29.334778 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pqdn" 
event={"ID":"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0","Type":"ContainerDied","Data":"a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d"} Feb 19 10:54:31 crc kubenswrapper[4780]: I0219 10:54:31.360014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pqdn" event={"ID":"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0","Type":"ContainerStarted","Data":"86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c"} Feb 19 10:54:31 crc kubenswrapper[4780]: I0219 10:54:31.392445 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9pqdn" podStartSLOduration=2.707833552 podStartE2EDuration="6.392411081s" podCreationTimestamp="2026-02-19 10:54:25 +0000 UTC" firstStartedPulling="2026-02-19 10:54:27.315044815 +0000 UTC m=+9210.058702284" lastFinishedPulling="2026-02-19 10:54:30.999622364 +0000 UTC m=+9213.743279813" observedRunningTime="2026-02-19 10:54:31.38917439 +0000 UTC m=+9214.132831859" watchObservedRunningTime="2026-02-19 10:54:31.392411081 +0000 UTC m=+9214.136068530" Feb 19 10:54:34 crc kubenswrapper[4780]: I0219 10:54:34.939564 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:54:34 crc kubenswrapper[4780]: E0219 10:54:34.940591 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:54:35 crc kubenswrapper[4780]: I0219 10:54:35.666541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:35 crc 
kubenswrapper[4780]: I0219 10:54:35.666614 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:36 crc kubenswrapper[4780]: I0219 10:54:36.117166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:36 crc kubenswrapper[4780]: I0219 10:54:36.471027 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:36 crc kubenswrapper[4780]: I0219 10:54:36.535590 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pqdn"] Feb 19 10:54:38 crc kubenswrapper[4780]: I0219 10:54:38.449194 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9pqdn" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="registry-server" containerID="cri-o://86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c" gracePeriod=2 Feb 19 10:54:38 crc kubenswrapper[4780]: I0219 10:54:38.932118 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.053335 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-catalog-content\") pod \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.053469 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gfrg\" (UniqueName: \"kubernetes.io/projected/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-kube-api-access-2gfrg\") pod \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.053752 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-utilities\") pod \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\" (UID: \"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0\") " Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.054505 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-utilities" (OuterVolumeSpecName: "utilities") pod "a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" (UID: "a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.054635 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.060269 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-kube-api-access-2gfrg" (OuterVolumeSpecName: "kube-api-access-2gfrg") pod "a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" (UID: "a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0"). InnerVolumeSpecName "kube-api-access-2gfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.080640 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" (UID: "a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.157829 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.157877 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gfrg\" (UniqueName: \"kubernetes.io/projected/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0-kube-api-access-2gfrg\") on node \"crc\" DevicePath \"\"" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.466356 4780 generic.go:334] "Generic (PLEG): container finished" podID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerID="86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c" exitCode=0 Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.466413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pqdn" event={"ID":"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0","Type":"ContainerDied","Data":"86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c"} Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.466465 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pqdn" event={"ID":"a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0","Type":"ContainerDied","Data":"201b0232fcb185052c25486b914b43eaa8d8e1becbd264fdd0fff8f9fd6aa90e"} Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.466485 4780 scope.go:117] "RemoveContainer" containerID="86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.466489 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pqdn" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.488414 4780 scope.go:117] "RemoveContainer" containerID="a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.512721 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pqdn"] Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.525408 4780 scope.go:117] "RemoveContainer" containerID="72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.529633 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pqdn"] Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.588721 4780 scope.go:117] "RemoveContainer" containerID="86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c" Feb 19 10:54:39 crc kubenswrapper[4780]: E0219 10:54:39.591758 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c\": container with ID starting with 86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c not found: ID does not exist" containerID="86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.591876 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c"} err="failed to get container status \"86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c\": rpc error: code = NotFound desc = could not find container \"86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c\": container with ID starting with 86395bce9a4a1c8c5219c2261e311c63563322f557a3e805e1eb76f0eaa5a94c not found: 
ID does not exist" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.591929 4780 scope.go:117] "RemoveContainer" containerID="a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d" Feb 19 10:54:39 crc kubenswrapper[4780]: E0219 10:54:39.592704 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d\": container with ID starting with a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d not found: ID does not exist" containerID="a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.592770 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d"} err="failed to get container status \"a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d\": rpc error: code = NotFound desc = could not find container \"a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d\": container with ID starting with a2941a57d12250f8bbd0ed51e0f2f0f13ab7826c377f15af8d2b64b98b14553d not found: ID does not exist" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.592813 4780 scope.go:117] "RemoveContainer" containerID="72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272" Feb 19 10:54:39 crc kubenswrapper[4780]: E0219 10:54:39.593417 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272\": container with ID starting with 72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272 not found: ID does not exist" containerID="72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.593444 4780 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272"} err="failed to get container status \"72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272\": rpc error: code = NotFound desc = could not find container \"72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272\": container with ID starting with 72222ffbfbd834b434570ad21b021f2db111ef83e12322615ffcac6776c00272 not found: ID does not exist" Feb 19 10:54:39 crc kubenswrapper[4780]: I0219 10:54:39.951071 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" path="/var/lib/kubelet/pods/a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0/volumes" Feb 19 10:54:45 crc kubenswrapper[4780]: I0219 10:54:45.939155 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:54:45 crc kubenswrapper[4780]: E0219 10:54:45.940051 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:54:57 crc kubenswrapper[4780]: I0219 10:54:57.947824 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:54:57 crc kubenswrapper[4780]: E0219 10:54:57.948925 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:55:11 crc kubenswrapper[4780]: I0219 10:55:11.939395 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:55:11 crc kubenswrapper[4780]: E0219 10:55:11.940405 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:55:26 crc kubenswrapper[4780]: I0219 10:55:26.938875 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:55:26 crc kubenswrapper[4780]: E0219 10:55:26.941194 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:55:37 crc kubenswrapper[4780]: I0219 10:55:37.946819 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:55:37 crc kubenswrapper[4780]: E0219 10:55:37.949249 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:55:52 crc kubenswrapper[4780]: I0219 10:55:52.939580 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:55:52 crc kubenswrapper[4780]: E0219 10:55:52.944672 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:56:04 crc kubenswrapper[4780]: I0219 10:56:04.939264 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:56:04 crc kubenswrapper[4780]: E0219 10:56:04.939989 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:56:15 crc kubenswrapper[4780]: I0219 10:56:15.939327 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:56:15 crc kubenswrapper[4780]: E0219 10:56:15.940088 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.859913 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hkxdn"] Feb 19 10:56:23 crc kubenswrapper[4780]: E0219 10:56:23.862684 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="extract-utilities" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.862734 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="extract-utilities" Feb 19 10:56:23 crc kubenswrapper[4780]: E0219 10:56:23.862810 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="extract-content" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.862819 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="extract-content" Feb 19 10:56:23 crc kubenswrapper[4780]: E0219 10:56:23.862844 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="registry-server" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.862850 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="registry-server" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.864924 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08a2db6-0e6b-43a9-bf7f-da74d2c4e6a0" containerName="registry-server" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.869696 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:23 crc kubenswrapper[4780]: I0219 10:56:23.924614 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkxdn"] Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.020699 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7dm6\" (UniqueName: \"kubernetes.io/projected/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-kube-api-access-l7dm6\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.020987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-catalog-content\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.021495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-utilities\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.123973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-utilities\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.124212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l7dm6\" (UniqueName: \"kubernetes.io/projected/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-kube-api-access-l7dm6\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.124268 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-catalog-content\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.124976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-utilities\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.124973 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-catalog-content\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.147075 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7dm6\" (UniqueName: \"kubernetes.io/projected/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-kube-api-access-l7dm6\") pod \"redhat-operators-hkxdn\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.206722 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:24 crc kubenswrapper[4780]: I0219 10:56:24.780913 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkxdn"] Feb 19 10:56:25 crc kubenswrapper[4780]: I0219 10:56:25.716350 4780 generic.go:334] "Generic (PLEG): container finished" podID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerID="d00426e196cfcc5cc056de6c24710bf92cd33bb13f66bb1769632162856343bd" exitCode=0 Feb 19 10:56:25 crc kubenswrapper[4780]: I0219 10:56:25.716409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerDied","Data":"d00426e196cfcc5cc056de6c24710bf92cd33bb13f66bb1769632162856343bd"} Feb 19 10:56:25 crc kubenswrapper[4780]: I0219 10:56:25.716710 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerStarted","Data":"25e5fed79f5e21e5c49f7f9bae6dd05467a9d82e7e4c5ef1ab0a8046ee75c243"} Feb 19 10:56:25 crc kubenswrapper[4780]: I0219 10:56:25.719241 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:56:27 crc kubenswrapper[4780]: I0219 10:56:27.756592 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerStarted","Data":"178406bd171bcd9aeca1f0873d3be7c514230dc317f85f626af7a8bff6f4cfbe"} Feb 19 10:56:30 crc kubenswrapper[4780]: I0219 10:56:30.938948 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:56:30 crc kubenswrapper[4780]: E0219 10:56:30.939775 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:56:32 crc kubenswrapper[4780]: I0219 10:56:32.816458 4780 generic.go:334] "Generic (PLEG): container finished" podID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerID="178406bd171bcd9aeca1f0873d3be7c514230dc317f85f626af7a8bff6f4cfbe" exitCode=0 Feb 19 10:56:32 crc kubenswrapper[4780]: I0219 10:56:32.816586 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerDied","Data":"178406bd171bcd9aeca1f0873d3be7c514230dc317f85f626af7a8bff6f4cfbe"} Feb 19 10:56:34 crc kubenswrapper[4780]: I0219 10:56:34.841409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerStarted","Data":"b65aa5172e233e977451ca38e07e75b96aeccaae95379ab7fe8c5c59c14952e1"} Feb 19 10:56:34 crc kubenswrapper[4780]: I0219 10:56:34.892044 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hkxdn" podStartSLOduration=3.703072847 podStartE2EDuration="11.892020895s" podCreationTimestamp="2026-02-19 10:56:23 +0000 UTC" firstStartedPulling="2026-02-19 10:56:25.718935514 +0000 UTC m=+9328.462592963" lastFinishedPulling="2026-02-19 10:56:33.907883562 +0000 UTC m=+9336.651541011" observedRunningTime="2026-02-19 10:56:34.887673186 +0000 UTC m=+9337.631330635" watchObservedRunningTime="2026-02-19 10:56:34.892020895 +0000 UTC m=+9337.635678344" Feb 19 10:56:44 crc kubenswrapper[4780]: I0219 10:56:44.209461 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:44 crc kubenswrapper[4780]: I0219 10:56:44.210085 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:44 crc kubenswrapper[4780]: I0219 10:56:44.267101 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:44 crc kubenswrapper[4780]: I0219 10:56:44.938370 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:56:44 crc kubenswrapper[4780]: E0219 10:56:44.938740 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:56:45 crc kubenswrapper[4780]: I0219 10:56:45.002192 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:45 crc kubenswrapper[4780]: I0219 10:56:45.062996 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkxdn"] Feb 19 10:56:48 crc kubenswrapper[4780]: I0219 10:56:48.282272 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hkxdn" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="registry-server" containerID="cri-o://b65aa5172e233e977451ca38e07e75b96aeccaae95379ab7fe8c5c59c14952e1" gracePeriod=2 Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.301678 4780 generic.go:334] "Generic (PLEG): container finished" podID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" 
containerID="b65aa5172e233e977451ca38e07e75b96aeccaae95379ab7fe8c5c59c14952e1" exitCode=0 Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.301770 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerDied","Data":"b65aa5172e233e977451ca38e07e75b96aeccaae95379ab7fe8c5c59c14952e1"} Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.302124 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkxdn" event={"ID":"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb","Type":"ContainerDied","Data":"25e5fed79f5e21e5c49f7f9bae6dd05467a9d82e7e4c5ef1ab0a8046ee75c243"} Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.302188 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e5fed79f5e21e5c49f7f9bae6dd05467a9d82e7e4c5ef1ab0a8046ee75c243" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.406316 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.519837 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7dm6\" (UniqueName: \"kubernetes.io/projected/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-kube-api-access-l7dm6\") pod \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.520483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-catalog-content\") pod \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.520569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-utilities\") pod \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\" (UID: \"cbf70fee-dff9-43a7-9e0e-b0b47073c4bb\") " Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.521510 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-utilities" (OuterVolumeSpecName: "utilities") pod "cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" (UID: "cbf70fee-dff9-43a7-9e0e-b0b47073c4bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.527506 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-kube-api-access-l7dm6" (OuterVolumeSpecName: "kube-api-access-l7dm6") pod "cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" (UID: "cbf70fee-dff9-43a7-9e0e-b0b47073c4bb"). InnerVolumeSpecName "kube-api-access-l7dm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.624067 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7dm6\" (UniqueName: \"kubernetes.io/projected/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-kube-api-access-l7dm6\") on node \"crc\" DevicePath \"\"" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.624105 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.675927 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" (UID: "cbf70fee-dff9-43a7-9e0e-b0b47073c4bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:56:49 crc kubenswrapper[4780]: I0219 10:56:49.727754 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:56:50 crc kubenswrapper[4780]: I0219 10:56:50.312118 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkxdn" Feb 19 10:56:50 crc kubenswrapper[4780]: I0219 10:56:50.342311 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkxdn"] Feb 19 10:56:50 crc kubenswrapper[4780]: I0219 10:56:50.353386 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hkxdn"] Feb 19 10:56:51 crc kubenswrapper[4780]: I0219 10:56:51.964678 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" path="/var/lib/kubelet/pods/cbf70fee-dff9-43a7-9e0e-b0b47073c4bb/volumes" Feb 19 10:56:55 crc kubenswrapper[4780]: I0219 10:56:55.940233 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:56:55 crc kubenswrapper[4780]: E0219 10:56:55.941151 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:57:10 crc kubenswrapper[4780]: I0219 10:57:10.938680 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:57:10 crc kubenswrapper[4780]: E0219 10:57:10.939479 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:57:23 crc kubenswrapper[4780]: I0219 10:57:23.938024 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:57:23 crc kubenswrapper[4780]: E0219 10:57:23.938734 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:57:34 crc kubenswrapper[4780]: I0219 10:57:34.938406 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:57:34 crc kubenswrapper[4780]: E0219 10:57:34.939253 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:57:45 crc kubenswrapper[4780]: I0219 10:57:45.944567 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:57:45 crc kubenswrapper[4780]: E0219 10:57:45.945585 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:57:59 crc kubenswrapper[4780]: I0219 10:57:59.940390 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:57:59 crc kubenswrapper[4780]: E0219 10:57:59.941223 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 10:58:12 crc kubenswrapper[4780]: I0219 10:58:12.938595 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 10:58:13 crc kubenswrapper[4780]: I0219 10:58:13.284053 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"41240fdb7cc7aac53ceda60c88994d134b1b2c646766972687361b786e0ff0fc"} Feb 19 10:58:18 crc kubenswrapper[4780]: I0219 10:58:18.653223 4780 scope.go:117] "RemoveContainer" containerID="639f62ec11c606e3a260facba65c1821ccd953b323c31c300713e795137d4657" Feb 19 10:59:18 crc kubenswrapper[4780]: I0219 10:59:18.716901 4780 scope.go:117] "RemoveContainer" containerID="3d19e44eae34b58d3a1c8821cf1d93d2521bd0252828af655405924335eff490" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.162308 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84"] Feb 19 11:00:00 crc kubenswrapper[4780]: E0219 11:00:00.164039 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="extract-content" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.164069 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="extract-content" Feb 19 11:00:00 crc kubenswrapper[4780]: E0219 11:00:00.164160 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.164173 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4780]: E0219 11:00:00.164192 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="extract-utilities" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.164206 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="extract-utilities" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.164482 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf70fee-dff9-43a7-9e0e-b0b47073c4bb" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.165671 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.168638 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.168731 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.179925 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84"] Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.220819 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67n8j\" (UniqueName: \"kubernetes.io/projected/24b3646f-7b41-4ddd-81b4-193858743612-kube-api-access-67n8j\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.221321 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b3646f-7b41-4ddd-81b4-193858743612-config-volume\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.221593 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b3646f-7b41-4ddd-81b4-193858743612-secret-volume\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.324344 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b3646f-7b41-4ddd-81b4-193858743612-secret-volume\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.325319 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67n8j\" (UniqueName: \"kubernetes.io/projected/24b3646f-7b41-4ddd-81b4-193858743612-kube-api-access-67n8j\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.325468 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b3646f-7b41-4ddd-81b4-193858743612-config-volume\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.326476 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b3646f-7b41-4ddd-81b4-193858743612-config-volume\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.330899 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/24b3646f-7b41-4ddd-81b4-193858743612-secret-volume\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.343541 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67n8j\" (UniqueName: \"kubernetes.io/projected/24b3646f-7b41-4ddd-81b4-193858743612-kube-api-access-67n8j\") pod \"collect-profiles-29524980-jbn84\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:00 crc kubenswrapper[4780]: I0219 11:00:00.495304 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:01 crc kubenswrapper[4780]: I0219 11:00:01.052066 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84"] Feb 19 11:00:01 crc kubenswrapper[4780]: I0219 11:00:01.484835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" event={"ID":"24b3646f-7b41-4ddd-81b4-193858743612","Type":"ContainerStarted","Data":"8a7be8607d27b430cd58fa7cc825d8a405577a49e480c80e1723072070e88930"} Feb 19 11:00:01 crc kubenswrapper[4780]: I0219 11:00:01.485164 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" event={"ID":"24b3646f-7b41-4ddd-81b4-193858743612","Type":"ContainerStarted","Data":"d8da26cd698790600832f2e7defe72dd9df9f392aabda1c60d34d54798d643e8"} Feb 19 11:00:01 crc kubenswrapper[4780]: I0219 11:00:01.502385 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" 
podStartSLOduration=1.5023558289999999 podStartE2EDuration="1.502355829s" podCreationTimestamp="2026-02-19 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:00:01.49959295 +0000 UTC m=+9544.243250399" watchObservedRunningTime="2026-02-19 11:00:01.502355829 +0000 UTC m=+9544.246013278" Feb 19 11:00:02 crc kubenswrapper[4780]: I0219 11:00:02.496856 4780 generic.go:334] "Generic (PLEG): container finished" podID="24b3646f-7b41-4ddd-81b4-193858743612" containerID="8a7be8607d27b430cd58fa7cc825d8a405577a49e480c80e1723072070e88930" exitCode=0 Feb 19 11:00:02 crc kubenswrapper[4780]: I0219 11:00:02.496994 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" event={"ID":"24b3646f-7b41-4ddd-81b4-193858743612","Type":"ContainerDied","Data":"8a7be8607d27b430cd58fa7cc825d8a405577a49e480c80e1723072070e88930"} Feb 19 11:00:03 crc kubenswrapper[4780]: I0219 11:00:03.924642 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.033841 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67n8j\" (UniqueName: \"kubernetes.io/projected/24b3646f-7b41-4ddd-81b4-193858743612-kube-api-access-67n8j\") pod \"24b3646f-7b41-4ddd-81b4-193858743612\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.033977 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b3646f-7b41-4ddd-81b4-193858743612-config-volume\") pod \"24b3646f-7b41-4ddd-81b4-193858743612\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.034115 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b3646f-7b41-4ddd-81b4-193858743612-secret-volume\") pod \"24b3646f-7b41-4ddd-81b4-193858743612\" (UID: \"24b3646f-7b41-4ddd-81b4-193858743612\") " Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.035048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b3646f-7b41-4ddd-81b4-193858743612-config-volume" (OuterVolumeSpecName: "config-volume") pod "24b3646f-7b41-4ddd-81b4-193858743612" (UID: "24b3646f-7b41-4ddd-81b4-193858743612"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.040778 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b3646f-7b41-4ddd-81b4-193858743612-kube-api-access-67n8j" (OuterVolumeSpecName: "kube-api-access-67n8j") pod "24b3646f-7b41-4ddd-81b4-193858743612" (UID: "24b3646f-7b41-4ddd-81b4-193858743612"). 
InnerVolumeSpecName "kube-api-access-67n8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.041295 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b3646f-7b41-4ddd-81b4-193858743612-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24b3646f-7b41-4ddd-81b4-193858743612" (UID: "24b3646f-7b41-4ddd-81b4-193858743612"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.138138 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b3646f-7b41-4ddd-81b4-193858743612-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.138193 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b3646f-7b41-4ddd-81b4-193858743612-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.138208 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67n8j\" (UniqueName: \"kubernetes.io/projected/24b3646f-7b41-4ddd-81b4-193858743612-kube-api-access-67n8j\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.515900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" event={"ID":"24b3646f-7b41-4ddd-81b4-193858743612","Type":"ContainerDied","Data":"d8da26cd698790600832f2e7defe72dd9df9f392aabda1c60d34d54798d643e8"} Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.515948 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8da26cd698790600832f2e7defe72dd9df9f392aabda1c60d34d54798d643e8" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.515961 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-jbn84" Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.585921 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q"] Feb 19 11:00:04 crc kubenswrapper[4780]: I0219 11:00:04.595725 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-z7p6q"] Feb 19 11:00:05 crc kubenswrapper[4780]: I0219 11:00:05.952656 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360350fe-48d7-4722-9a17-5a20018baa6f" path="/var/lib/kubelet/pods/360350fe-48d7-4722-9a17-5a20018baa6f/volumes" Feb 19 11:00:18 crc kubenswrapper[4780]: I0219 11:00:18.776579 4780 scope.go:117] "RemoveContainer" containerID="ea5fd602e72fbf5a4acfc073c8ebd10ad437fc17c4a7eecc58d3975d52cfe33b" Feb 19 11:00:36 crc kubenswrapper[4780]: I0219 11:00:36.336586 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:00:36 crc kubenswrapper[4780]: I0219 11:00:36.337546 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.163295 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524981-b2jl8"] Feb 19 11:01:00 crc kubenswrapper[4780]: E0219 11:01:00.164380 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="24b3646f-7b41-4ddd-81b4-193858743612" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.164399 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b3646f-7b41-4ddd-81b4-193858743612" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.164665 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b3646f-7b41-4ddd-81b4-193858743612" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.165619 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.176160 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524981-b2jl8"] Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.188666 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6j7\" (UniqueName: \"kubernetes.io/projected/08237c4e-f2a9-400e-a25f-15abc2efe0cb-kube-api-access-tb6j7\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.188715 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-combined-ca-bundle\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.188863 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-fernet-keys\") pod \"keystone-cron-29524981-b2jl8\" (UID: 
\"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.188965 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-config-data\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.291020 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-fernet-keys\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.291196 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-config-data\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.291288 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb6j7\" (UniqueName: \"kubernetes.io/projected/08237c4e-f2a9-400e-a25f-15abc2efe0cb-kube-api-access-tb6j7\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.291316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-combined-ca-bundle\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " 
pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.298082 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-combined-ca-bundle\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.298227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-config-data\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.299955 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-fernet-keys\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.315026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb6j7\" (UniqueName: \"kubernetes.io/projected/08237c4e-f2a9-400e-a25f-15abc2efe0cb-kube-api-access-tb6j7\") pod \"keystone-cron-29524981-b2jl8\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:00 crc kubenswrapper[4780]: I0219 11:01:00.488170 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:01 crc kubenswrapper[4780]: I0219 11:01:01.073062 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524981-b2jl8"] Feb 19 11:01:01 crc kubenswrapper[4780]: I0219 11:01:01.119441 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-b2jl8" event={"ID":"08237c4e-f2a9-400e-a25f-15abc2efe0cb","Type":"ContainerStarted","Data":"f72a9a2c4dbce1f233128784dfb39a77bf7500979e45b19e5d686606ba9b00e2"} Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.135704 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-b2jl8" event={"ID":"08237c4e-f2a9-400e-a25f-15abc2efe0cb","Type":"ContainerStarted","Data":"c626a0d3e9e753bcfa1904dd006c69f6e764242e2107a83e127fe3595849e370"} Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.186721 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524981-b2jl8" podStartSLOduration=2.18669175 podStartE2EDuration="2.18669175s" podCreationTimestamp="2026-02-19 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:01:02.165277283 +0000 UTC m=+9604.908934732" watchObservedRunningTime="2026-02-19 11:01:02.18669175 +0000 UTC m=+9604.930349199" Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.565413 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_30ad128e-0986-4944-8bdb-ae191d72c28d/init-config-reloader/0.log" Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.773883 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_30ad128e-0986-4944-8bdb-ae191d72c28d/init-config-reloader/0.log" Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.812268 4780 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_30ad128e-0986-4944-8bdb-ae191d72c28d/alertmanager/0.log" Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.914979 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_30ad128e-0986-4944-8bdb-ae191d72c28d/config-reloader/0.log" Feb 19 11:01:02 crc kubenswrapper[4780]: I0219 11:01:02.998009 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8155dc58-df40-44b2-8a5f-913ece382018/aodh-api/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.101616 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8155dc58-df40-44b2-8a5f-913ece382018/aodh-evaluator/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.147307 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8155dc58-df40-44b2-8a5f-913ece382018/aodh-listener/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.293707 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8155dc58-df40-44b2-8a5f-913ece382018/aodh-notifier/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.336876 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c4d644ff4-gq2np_8d73ea82-95d2-49e5-b2a9-974c7e440807/barbican-api/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.456280 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c4d644ff4-gq2np_8d73ea82-95d2-49e5-b2a9-974c7e440807/barbican-api-log/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.553840 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5746675c94-xbv7n_e9969382-7625-4a4e-b6df-765bc78bec0c/barbican-keystone-listener/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.696231 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-5746675c94-xbv7n_e9969382-7625-4a4e-b6df-765bc78bec0c/barbican-keystone-listener-log/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.798516 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f79c4dc7-dmlmt_598ae77e-d2eb-4858-9443-dc5bc697e68a/barbican-worker/0.log" Feb 19 11:01:03 crc kubenswrapper[4780]: I0219 11:01:03.840835 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f79c4dc7-dmlmt_598ae77e-d2eb-4858-9443-dc5bc697e68a/barbican-worker-log/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.020721 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-4xbq5_ce60d62e-00ef-4205-a928-7df63a1e5837/bootstrap-openstack-openstack-cell1/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.098065 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_817f9599-e2dd-4250-998f-bbd58105c51c/ceilometer-central-agent/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.264767 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_817f9599-e2dd-4250-998f-bbd58105c51c/proxy-httpd/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.277765 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_817f9599-e2dd-4250-998f-bbd58105c51c/ceilometer-notification-agent/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.352847 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_817f9599-e2dd-4250-998f-bbd58105c51c/sg-core/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.529340 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-nczgk_aa96424d-d698-4a79-a271-8150de092abf/ceph-client-openstack-openstack-cell1/0.log" Feb 19 11:01:04 
crc kubenswrapper[4780]: I0219 11:01:04.666708 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_17681618-f82f-482e-a791-2eaa61b665b9/cinder-api/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.686218 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_17681618-f82f-482e-a791-2eaa61b665b9/cinder-api-log/0.log" Feb 19 11:01:04 crc kubenswrapper[4780]: I0219 11:01:04.989398 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_30e71513-4012-4b35-8571-9349d75bed4b/probe/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.053379 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_30e71513-4012-4b35-8571-9349d75bed4b/cinder-backup/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.144961 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1b7508c7-4a7b-4c69-ac07-655e84e602e5/cinder-scheduler/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.285562 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_88c02885-785b-46df-bbe7-259243eee84a/cinder-volume/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.288070 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1b7508c7-4a7b-4c69-ac07-655e84e602e5/probe/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.423260 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_88c02885-785b-46df-bbe7-259243eee84a/probe/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.516188 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-lpwr6_8073fbd1-1d6a-4efc-bca3-733a5deec1b3/configure-network-openstack-openstack-cell1/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 
11:01:05.709216 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-2cn8b_ae0bf8c8-26a1-4c08-9b45-859f9bc8d65c/configure-os-openstack-openstack-cell1/0.log" Feb 19 11:01:05 crc kubenswrapper[4780]: I0219 11:01:05.880349 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-9f79c7847-d2mxn_19b07627-19bc-4e68-8ff4-e2d70d76b4a2/init/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.045905 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-9f79c7847-d2mxn_19b07627-19bc-4e68-8ff4-e2d70d76b4a2/init/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.093955 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-9f79c7847-d2mxn_19b07627-19bc-4e68-8ff4-e2d70d76b4a2/dnsmasq-dns/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.154207 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-6st2r_ae23b8f2-95fe-4b64-a80e-c6687b239734/download-cache-openstack-openstack-cell1/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.189030 4780 generic.go:334] "Generic (PLEG): container finished" podID="08237c4e-f2a9-400e-a25f-15abc2efe0cb" containerID="c626a0d3e9e753bcfa1904dd006c69f6e764242e2107a83e127fe3595849e370" exitCode=0 Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.189105 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-b2jl8" event={"ID":"08237c4e-f2a9-400e-a25f-15abc2efe0cb","Type":"ContainerDied","Data":"c626a0d3e9e753bcfa1904dd006c69f6e764242e2107a83e127fe3595849e370"} Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.336427 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.336509 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.349680 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_42b72844-a58f-497a-a9c3-0707e36e0bb5/glance-httpd/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.360668 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_42b72844-a58f-497a-a9c3-0707e36e0bb5/glance-log/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.491306 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87/glance-httpd/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.536487 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a6dc9c0b-c8ab-4695-b792-3dd0ddc1bd87/glance-log/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.748902 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-755766ddd-nmcl6_cfccb4ce-3da9-487a-b154-ed091e2a0a60/heat-api/0.log" Feb 19 11:01:06 crc kubenswrapper[4780]: I0219 11:01:06.890092 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-d7c68c799-9jfcj_1493d17c-fff8-4696-a7e3-b5c686cb2b82/heat-cfnapi/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.031707 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-engine-7cbb6c7cf-xmjgd_dff9e661-9dbc-45a0-8545-63574793fc59/heat-engine/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.212849 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cf49b6979-xt4hk_bb0938c1-1dba-442f-ba05-e445bb201c42/horizon/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.242361 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cf49b6979-xt4hk_bb0938c1-1dba-442f-ba05-e445bb201c42/horizon-log/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.298191 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-dlpvv_76cb5bb5-2704-4830-a56c-da79652d9656/install-certs-openstack-openstack-cell1/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.470442 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-kkk8v_3e92bcc6-d6ee-4a29-ad83-37b71a80085f/install-os-openstack-openstack-cell1/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.630402 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.698415 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-combined-ca-bundle\") pod \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.698880 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-fernet-keys\") pod \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.699040 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb6j7\" (UniqueName: \"kubernetes.io/projected/08237c4e-f2a9-400e-a25f-15abc2efe0cb-kube-api-access-tb6j7\") pod \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.699198 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-config-data\") pod \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\" (UID: \"08237c4e-f2a9-400e-a25f-15abc2efe0cb\") " Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.710102 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "08237c4e-f2a9-400e-a25f-15abc2efe0cb" (UID: "08237c4e-f2a9-400e-a25f-15abc2efe0cb"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.711047 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524921-r86qv_77b6fd2d-36a9-4498-a369-266c0b665a28/keystone-cron/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.714672 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08237c4e-f2a9-400e-a25f-15abc2efe0cb-kube-api-access-tb6j7" (OuterVolumeSpecName: "kube-api-access-tb6j7") pod "08237c4e-f2a9-400e-a25f-15abc2efe0cb" (UID: "08237c4e-f2a9-400e-a25f-15abc2efe0cb"). InnerVolumeSpecName "kube-api-access-tb6j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.732017 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c45fc85b7-6d97x_5cf1e4e1-34c3-4c03-b059-bb276bf2c9e3/keystone-api/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.751371 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08237c4e-f2a9-400e-a25f-15abc2efe0cb" (UID: "08237c4e-f2a9-400e-a25f-15abc2efe0cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.780456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-config-data" (OuterVolumeSpecName: "config-data") pod "08237c4e-f2a9-400e-a25f-15abc2efe0cb" (UID: "08237c4e-f2a9-400e-a25f-15abc2efe0cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.801711 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.801920 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb6j7\" (UniqueName: \"kubernetes.io/projected/08237c4e-f2a9-400e-a25f-15abc2efe0cb-kube-api-access-tb6j7\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.801988 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.802045 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08237c4e-f2a9-400e-a25f-15abc2efe0cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.899294 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524981-b2jl8_08237c4e-f2a9-400e-a25f-15abc2efe0cb/keystone-cron/0.log" Feb 19 11:01:07 crc kubenswrapper[4780]: I0219 11:01:07.926812 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9d263497-7e86-4475-a86e-c49fa0b57cf3/kube-state-metrics/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.078037 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-crkmk_e1ea75b8-e3be-4982-ad66-e85b3ae2a8de/libvirt-openstack-openstack-cell1/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.208309 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-b2jl8" 
event={"ID":"08237c4e-f2a9-400e-a25f-15abc2efe0cb","Type":"ContainerDied","Data":"f72a9a2c4dbce1f233128784dfb39a77bf7500979e45b19e5d686606ba9b00e2"} Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.208383 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a9a2c4dbce1f233128784dfb39a77bf7500979e45b19e5d686606ba9b00e2" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.208414 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524981-b2jl8" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.233003 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5c366691-b5b9-4dd7-a9c1-0ac7a6542898/manila-api/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.283028 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5c366691-b5b9-4dd7-a9c1-0ac7a6542898/manila-api-log/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.389238 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e9a884f6-60ed-4e43-9d3d-9c005737cc3d/manila-scheduler/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.958916 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1211f1b2-3545-4ac9-8913-63ee3ed133ad/probe/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.972481 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e9a884f6-60ed-4e43-9d3d-9c005737cc3d/probe/0.log" Feb 19 11:01:08 crc kubenswrapper[4780]: I0219 11:01:08.977851 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1211f1b2-3545-4ac9-8913-63ee3ed133ad/manila-share/0.log" Feb 19 11:01:09 crc kubenswrapper[4780]: I0219 11:01:09.321828 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-86ddc9fb9f-mwj2f_399eb34a-c13c-4454-849b-81645c2d6d44/neutron-api/0.log" Feb 19 11:01:09 crc kubenswrapper[4780]: I0219 11:01:09.380217 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86ddc9fb9f-mwj2f_399eb34a-c13c-4454-849b-81645c2d6d44/neutron-httpd/0.log" Feb 19 11:01:09 crc kubenswrapper[4780]: I0219 11:01:09.525581 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-jdh54_96533bdb-10d6-4b37-bbbb-45f209e746d8/neutron-dhcp-openstack-openstack-cell1/0.log" Feb 19 11:01:09 crc kubenswrapper[4780]: I0219 11:01:09.649646 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-wjpcn_24198e28-4159-44b0-ac69-e42faa76272a/neutron-metadata-openstack-openstack-cell1/0.log" Feb 19 11:01:09 crc kubenswrapper[4780]: I0219 11:01:09.839290 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-8ghz7_e64ad28f-dcfa-4fca-b69c-53cec95474d9/neutron-sriov-openstack-openstack-cell1/0.log" Feb 19 11:01:10 crc kubenswrapper[4780]: I0219 11:01:10.022083 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b538280-e74c-4c4a-8f3f-9f6d12254a76/nova-api-api/0.log" Feb 19 11:01:10 crc kubenswrapper[4780]: I0219 11:01:10.140271 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b538280-e74c-4c4a-8f3f-9f6d12254a76/nova-api-log/0.log" Feb 19 11:01:10 crc kubenswrapper[4780]: I0219 11:01:10.270456 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_790dc4cb-be5e-435f-b67b-81b27bbe7048/nova-cell0-conductor-conductor/0.log" Feb 19 11:01:10 crc kubenswrapper[4780]: I0219 11:01:10.436454 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_1fc78037-0a58-45a0-9beb-445eb1327707/nova-cell1-conductor-conductor/0.log" Feb 19 11:01:10 crc kubenswrapper[4780]: I0219 11:01:10.794204 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4545c739-d057-47e0-820b-a3e73c74ecd8/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 11:01:10 crc kubenswrapper[4780]: I0219 11:01:10.908445 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrfdpd_4ec28c41-efa8-4b38-8c39-784760e93e05/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Feb 19 11:01:11 crc kubenswrapper[4780]: I0219 11:01:11.498890 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-96mpg_8ff8dfaa-4716-4e9d-bd6c-d6a5afc2fd56/nova-cell1-openstack-openstack-cell1/0.log" Feb 19 11:01:11 crc kubenswrapper[4780]: I0219 11:01:11.552267 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2047b7bd-ad35-4f35-a73b-1f984bf3891b/nova-metadata-log/0.log" Feb 19 11:01:11 crc kubenswrapper[4780]: I0219 11:01:11.714469 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2047b7bd-ad35-4f35-a73b-1f984bf3891b/nova-metadata-metadata/0.log" Feb 19 11:01:11 crc kubenswrapper[4780]: I0219 11:01:11.869905 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_87290173-ccab-49e6-8f60-4bfeabd11a37/nova-scheduler-scheduler/0.log" Feb 19 11:01:11 crc kubenswrapper[4780]: I0219 11:01:11.928065 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-cc57564bf-2fkwj_81ea7c27-2325-4ff4-95f3-beb9d2aff6d1/init/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.221615 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-api-cc57564bf-2fkwj_81ea7c27-2325-4ff4-95f3-beb9d2aff6d1/init/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.377032 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-cc57564bf-2fkwj_81ea7c27-2325-4ff4-95f3-beb9d2aff6d1/octavia-api-provider-agent/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.518847 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-q4924_c7858366-1626-44dc-885d-78118fe2c43d/init/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.603446 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-cc57564bf-2fkwj_81ea7c27-2325-4ff4-95f3-beb9d2aff6d1/octavia-api/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.845513 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-q4924_c7858366-1626-44dc-885d-78118fe2c43d/init/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.860743 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9x46n_b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6/init/0.log" Feb 19 11:01:12 crc kubenswrapper[4780]: I0219 11:01:12.914323 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-q4924_c7858366-1626-44dc-885d-78118fe2c43d/octavia-healthmanager/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.247670 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9x46n_b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6/init/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.370450 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9x46n_b89e7e37-ecc6-4aad-a0d8-c096b7b2eff6/octavia-housekeeping/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.413987 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-gpkh2_2ab7a7d5-41e0-4452-9088-63530b72c172/init/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.553955 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-gpkh2_2ab7a7d5-41e0-4452-9088-63530b72c172/init/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.642038 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-gpkh2_2ab7a7d5-41e0-4452-9088-63530b72c172/octavia-amphora-httpd/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.716661 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-nx6fv_a0a6c5f6-8431-4468-9639-5f83a903d0ab/init/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.931121 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-nx6fv_a0a6c5f6-8431-4468-9639-5f83a903d0ab/octavia-rsyslog/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.937438 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-8fcqd_ca4ad79a-75c5-46de-8d49-56d2f5bab086/init/0.log" Feb 19 11:01:13 crc kubenswrapper[4780]: I0219 11:01:13.954435 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-nx6fv_a0a6c5f6-8431-4468-9639-5f83a903d0ab/init/0.log" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.133209 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-27v8v"] Feb 19 11:01:14 crc kubenswrapper[4780]: E0219 11:01:14.140972 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08237c4e-f2a9-400e-a25f-15abc2efe0cb" containerName="keystone-cron" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.141003 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="08237c4e-f2a9-400e-a25f-15abc2efe0cb" containerName="keystone-cron" Feb 19 11:01:14 
crc kubenswrapper[4780]: I0219 11:01:14.141558 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="08237c4e-f2a9-400e-a25f-15abc2efe0cb" containerName="keystone-cron" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.145309 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.152388 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27v8v"] Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.227001 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-8fcqd_ca4ad79a-75c5-46de-8d49-56d2f5bab086/init/0.log" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.260999 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26x8\" (UniqueName: \"kubernetes.io/projected/115b6378-004d-4ad9-96b2-afd59731c76d-kube-api-access-z26x8\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.261154 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-catalog-content\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.261274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-utilities\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " 
pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.322622 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9826a80d-cdd0-4ed4-b32a-6a25d2979e68/mysql-bootstrap/0.log" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.362751 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z26x8\" (UniqueName: \"kubernetes.io/projected/115b6378-004d-4ad9-96b2-afd59731c76d-kube-api-access-z26x8\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.362842 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-catalog-content\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.362937 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-utilities\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.363428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-utilities\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.363579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-catalog-content\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.397106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z26x8\" (UniqueName: \"kubernetes.io/projected/115b6378-004d-4ad9-96b2-afd59731c76d-kube-api-access-z26x8\") pod \"certified-operators-27v8v\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.489937 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.577071 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-8fcqd_ca4ad79a-75c5-46de-8d49-56d2f5bab086/octavia-worker/0.log" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.608387 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9826a80d-cdd0-4ed4-b32a-6a25d2979e68/mysql-bootstrap/0.log" Feb 19 11:01:14 crc kubenswrapper[4780]: I0219 11:01:14.709975 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9826a80d-cdd0-4ed4-b32a-6a25d2979e68/galera/0.log" Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.079985 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c2a3f34-a456-4b03-bdea-0493bcb47f00/mysql-bootstrap/0.log" Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.130326 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27v8v"] Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.290832 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerStarted","Data":"485ada8843ebe9a9509e371f208e786cb500aec90edade5699a310a4d2ebf751"} Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.396330 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_09bafe4c-f2c6-4736-b731-c3ce9f68f18f/openstackclient/0.log" Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.403950 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c2a3f34-a456-4b03-bdea-0493bcb47f00/mysql-bootstrap/0.log" Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.491532 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c2a3f34-a456-4b03-bdea-0493bcb47f00/galera/0.log" Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.727604 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4lqmn_417d0039-dd62-4b81-bcb7-5859c1d11b4e/ovn-controller/0.log" Feb 19 11:01:15 crc kubenswrapper[4780]: I0219 11:01:15.846272 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t8bv6_e1682f87-dd9a-4fc6-96df-f50c80a4af9e/openstack-network-exporter/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.037592 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l695t_ed01b93b-9b96-45fd-ac68-1ca3e9891906/ovsdb-server-init/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.306797 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l695t_ed01b93b-9b96-45fd-ac68-1ca3e9891906/ovsdb-server/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.315193 4780 generic.go:334] "Generic (PLEG): container finished" podID="115b6378-004d-4ad9-96b2-afd59731c76d" containerID="ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3" exitCode=0 Feb 
19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.315303 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerDied","Data":"ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3"} Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.333406 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l695t_ed01b93b-9b96-45fd-ac68-1ca3e9891906/ovs-vswitchd/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.373287 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l695t_ed01b93b-9b96-45fd-ac68-1ca3e9891906/ovsdb-server-init/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.577085 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1e3aa9e-9dc2-4815-b2e1-9707609725ea/openstack-network-exporter/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.668999 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1e3aa9e-9dc2-4815-b2e1-9707609725ea/ovn-northd/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.813187 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-5czgg_4813ef2e-bae5-43f2-b914-a755f4cac0ad/ovn-openstack-openstack-cell1/0.log" Feb 19 11:01:16 crc kubenswrapper[4780]: I0219 11:01:16.941235 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_299d8395-d188-40f6-8527-b6cfc8084475/openstack-network-exporter/0.log" Feb 19 11:01:17 crc kubenswrapper[4780]: I0219 11:01:17.092072 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_299d8395-d188-40f6-8527-b6cfc8084475/ovsdbserver-nb/0.log" Feb 19 11:01:17 crc kubenswrapper[4780]: I0219 11:01:17.350585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerStarted","Data":"16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f"} Feb 19 11:01:17 crc kubenswrapper[4780]: I0219 11:01:17.660756 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_68f9285d-acc2-417e-b046-c6991dd305c8/ovsdbserver-nb/0.log" Feb 19 11:01:17 crc kubenswrapper[4780]: I0219 11:01:17.707148 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_68f9285d-acc2-417e-b046-c6991dd305c8/openstack-network-exporter/0.log" Feb 19 11:01:17 crc kubenswrapper[4780]: I0219 11:01:17.769956 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_e1b5ec7b-ac41-4149-93ae-7240d7bf6008/openstack-network-exporter/0.log" Feb 19 11:01:17 crc kubenswrapper[4780]: I0219 11:01:17.931620 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_e1b5ec7b-ac41-4149-93ae-7240d7bf6008/ovsdbserver-nb/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.045190 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_25c6eb26-4b88-4ced-a69d-0527da97ed8e/openstack-network-exporter/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.097847 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_25c6eb26-4b88-4ced-a69d-0527da97ed8e/ovsdbserver-sb/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.269341 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b/openstack-network-exporter/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.306046 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_fe92b3dd-5dcb-4d6e-bcef-13cf4e25bf3b/ovsdbserver-sb/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: 
I0219 11:01:18.366303 4780 generic.go:334] "Generic (PLEG): container finished" podID="115b6378-004d-4ad9-96b2-afd59731c76d" containerID="16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f" exitCode=0 Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.366347 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerDied","Data":"16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f"} Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.438775 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_400a0a9a-193f-4191-aff8-2549e9f04533/openstack-network-exporter/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.496391 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_400a0a9a-193f-4191-aff8-2549e9f04533/ovsdbserver-sb/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.698759 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59d4db5886-9lhpt_79058d64-55d1-481a-92a4-65f605b10a3b/placement-api/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.792172 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59d4db5886-9lhpt_79058d64-55d1-481a-92a4-65f605b10a3b/placement-log/0.log" Feb 19 11:01:18 crc kubenswrapper[4780]: I0219 11:01:18.893267 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cncfj9_bb82fc50-146a-4618-9e41-1372bf42a5d4/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.050826 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a321740f-f577-4e8c-816d-95b714f098c7/init-config-reloader/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: 
I0219 11:01:19.284066 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a321740f-f577-4e8c-816d-95b714f098c7/init-config-reloader/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.301948 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a321740f-f577-4e8c-816d-95b714f098c7/prometheus/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.357312 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a321740f-f577-4e8c-816d-95b714f098c7/config-reloader/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.378834 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerStarted","Data":"4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3"} Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.411276 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-27v8v" podStartSLOduration=2.9353273189999998 podStartE2EDuration="5.411249461s" podCreationTimestamp="2026-02-19 11:01:14 +0000 UTC" firstStartedPulling="2026-02-19 11:01:16.31899156 +0000 UTC m=+9619.062649009" lastFinishedPulling="2026-02-19 11:01:18.794913702 +0000 UTC m=+9621.538571151" observedRunningTime="2026-02-19 11:01:19.404501052 +0000 UTC m=+9622.148158501" watchObservedRunningTime="2026-02-19 11:01:19.411249461 +0000 UTC m=+9622.154906920" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.516226 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a321740f-f577-4e8c-816d-95b714f098c7/thanos-sidecar/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.677513 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b1ad9ff-a229-4a5d-ae0a-4df21033325a/setup-container/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.778377 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ae4a358a-9b1f-47a2-9e43-bed0e117ff1d/memcached/0.log" Feb 19 11:01:19 crc kubenswrapper[4780]: I0219 11:01:19.824994 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b1ad9ff-a229-4a5d-ae0a-4df21033325a/setup-container/0.log" Feb 19 11:01:20 crc kubenswrapper[4780]: I0219 11:01:20.595187 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e3cadc0-1b7f-43a8-b290-f0f76853af4a/setup-container/0.log" Feb 19 11:01:20 crc kubenswrapper[4780]: I0219 11:01:20.633504 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b1ad9ff-a229-4a5d-ae0a-4df21033325a/rabbitmq/0.log" Feb 19 11:01:20 crc kubenswrapper[4780]: I0219 11:01:20.814807 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e3cadc0-1b7f-43a8-b290-f0f76853af4a/setup-container/0.log" Feb 19 11:01:20 crc kubenswrapper[4780]: I0219 11:01:20.842278 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-pxmc7_6924c64f-eb99-482b-ac0d-97737deb9e6c/reboot-os-openstack-openstack-cell1/0.log" Feb 19 11:01:20 crc kubenswrapper[4780]: I0219 11:01:20.990886 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-dbmsk_c5a28a0e-38b0-4936-be75-0b2e880b0696/run-os-openstack-openstack-cell1/0.log" Feb 19 11:01:21 crc kubenswrapper[4780]: I0219 11:01:21.256753 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-k2pj5_4f14b4e2-5e66-494b-ac59-559917345d5e/ssh-known-hosts-openstack/0.log" Feb 19 11:01:21 crc kubenswrapper[4780]: I0219 11:01:21.270479 
4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e3cadc0-1b7f-43a8-b290-f0f76853af4a/rabbitmq/0.log" Feb 19 11:01:21 crc kubenswrapper[4780]: I0219 11:01:21.627066 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-bpgr5_48bfa98a-2036-4c70-a61d-11579ff28164/telemetry-openstack-openstack-cell1/0.log" Feb 19 11:01:21 crc kubenswrapper[4780]: I0219 11:01:21.890740 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-4dwt2_d4bcad9c-e8e3-4090-ac8b-015bfce05a61/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Feb 19 11:01:21 crc kubenswrapper[4780]: I0219 11:01:21.934903 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-6l66j_3d28700e-0688-4901-85e5-cbef8194588b/validate-network-openstack-openstack-cell1/0.log" Feb 19 11:01:24 crc kubenswrapper[4780]: I0219 11:01:24.491106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:24 crc kubenswrapper[4780]: I0219 11:01:24.491171 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:24 crc kubenswrapper[4780]: I0219 11:01:24.547926 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:25 crc kubenswrapper[4780]: I0219 11:01:25.509657 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:25 crc kubenswrapper[4780]: I0219 11:01:25.583571 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27v8v"] Feb 19 11:01:27 crc kubenswrapper[4780]: I0219 11:01:27.473394 4780 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-27v8v" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="registry-server" containerID="cri-o://4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3" gracePeriod=2 Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.036499 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.149759 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-utilities\") pod \"115b6378-004d-4ad9-96b2-afd59731c76d\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.150466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-catalog-content\") pod \"115b6378-004d-4ad9-96b2-afd59731c76d\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.150585 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z26x8\" (UniqueName: \"kubernetes.io/projected/115b6378-004d-4ad9-96b2-afd59731c76d-kube-api-access-z26x8\") pod \"115b6378-004d-4ad9-96b2-afd59731c76d\" (UID: \"115b6378-004d-4ad9-96b2-afd59731c76d\") " Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.153224 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-utilities" (OuterVolumeSpecName: "utilities") pod "115b6378-004d-4ad9-96b2-afd59731c76d" (UID: "115b6378-004d-4ad9-96b2-afd59731c76d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.196754 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115b6378-004d-4ad9-96b2-afd59731c76d-kube-api-access-z26x8" (OuterVolumeSpecName: "kube-api-access-z26x8") pod "115b6378-004d-4ad9-96b2-afd59731c76d" (UID: "115b6378-004d-4ad9-96b2-afd59731c76d"). InnerVolumeSpecName "kube-api-access-z26x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.219253 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "115b6378-004d-4ad9-96b2-afd59731c76d" (UID: "115b6378-004d-4ad9-96b2-afd59731c76d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.253715 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.253755 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z26x8\" (UniqueName: \"kubernetes.io/projected/115b6378-004d-4ad9-96b2-afd59731c76d-kube-api-access-z26x8\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.253766 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115b6378-004d-4ad9-96b2-afd59731c76d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.491099 4780 generic.go:334] "Generic (PLEG): container finished" podID="115b6378-004d-4ad9-96b2-afd59731c76d" 
containerID="4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3" exitCode=0 Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.491177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerDied","Data":"4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3"} Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.491209 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27v8v" event={"ID":"115b6378-004d-4ad9-96b2-afd59731c76d","Type":"ContainerDied","Data":"485ada8843ebe9a9509e371f208e786cb500aec90edade5699a310a4d2ebf751"} Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.491231 4780 scope.go:117] "RemoveContainer" containerID="4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.491416 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27v8v" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.518837 4780 scope.go:117] "RemoveContainer" containerID="16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.549669 4780 scope.go:117] "RemoveContainer" containerID="ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.568215 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27v8v"] Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.579824 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-27v8v"] Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.606357 4780 scope.go:117] "RemoveContainer" containerID="4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3" Feb 19 11:01:28 crc kubenswrapper[4780]: E0219 11:01:28.610831 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3\": container with ID starting with 4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3 not found: ID does not exist" containerID="4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.610898 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3"} err="failed to get container status \"4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3\": rpc error: code = NotFound desc = could not find container \"4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3\": container with ID starting with 4d130e992d42e2077367001b783a4b14040772b8f79d2d9db8b01e287ee36cf3 not 
found: ID does not exist" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.610940 4780 scope.go:117] "RemoveContainer" containerID="16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f" Feb 19 11:01:28 crc kubenswrapper[4780]: E0219 11:01:28.611400 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f\": container with ID starting with 16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f not found: ID does not exist" containerID="16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.611439 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f"} err="failed to get container status \"16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f\": rpc error: code = NotFound desc = could not find container \"16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f\": container with ID starting with 16b9c7d5e06f62acc9bd7ddefdb2e6d5ceed8c5858fdffcf2c722cb980533f1f not found: ID does not exist" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.611464 4780 scope.go:117] "RemoveContainer" containerID="ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3" Feb 19 11:01:28 crc kubenswrapper[4780]: E0219 11:01:28.611778 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3\": container with ID starting with ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3 not found: ID does not exist" containerID="ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3" Feb 19 11:01:28 crc kubenswrapper[4780]: I0219 11:01:28.611796 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3"} err="failed to get container status \"ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3\": rpc error: code = NotFound desc = could not find container \"ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3\": container with ID starting with ea14161e52280f91e60f8d519a1eb21c99538e4927bd10d2b98e53cc5ec6abf3 not found: ID does not exist" Feb 19 11:01:29 crc kubenswrapper[4780]: I0219 11:01:29.958226 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" path="/var/lib/kubelet/pods/115b6378-004d-4ad9-96b2-afd59731c76d/volumes" Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.335877 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.336552 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.336602 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.337471 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"41240fdb7cc7aac53ceda60c88994d134b1b2c646766972687361b786e0ff0fc"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.337526 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://41240fdb7cc7aac53ceda60c88994d134b1b2c646766972687361b786e0ff0fc" gracePeriod=600 Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.587266 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="41240fdb7cc7aac53ceda60c88994d134b1b2c646766972687361b786e0ff0fc" exitCode=0 Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.587741 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"41240fdb7cc7aac53ceda60c88994d134b1b2c646766972687361b786e0ff0fc"} Feb 19 11:01:36 crc kubenswrapper[4780]: I0219 11:01:36.587809 4780 scope.go:117] "RemoveContainer" containerID="082adde1868a54c58b88fa8c424034f965597f7714d75653ba3271154c1b0c1c" Feb 19 11:01:37 crc kubenswrapper[4780]: I0219 11:01:37.614720 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"} Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.288258 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/util/0.log" Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.498391 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/pull/0.log" Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.503037 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/util/0.log" Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.549987 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/pull/0.log" Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.822317 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/pull/0.log" Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.843967 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/extract/0.log" Feb 19 11:01:49 crc kubenswrapper[4780]: I0219 11:01:49.849149 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kh8nl_14e7421b-1fd7-4d9d-995c-c2855cc56779/util/0.log" Feb 19 11:01:50 crc kubenswrapper[4780]: I0219 11:01:50.451054 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-bw4n6_0bd45130-dc60-4a0b-882d-10f9fbb742d2/manager/0.log" Feb 19 11:01:50 crc 
kubenswrapper[4780]: I0219 11:01:50.888859 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-pqwdn_7fa9e6d3-35ef-4e34-908f-709a5e3980b3/manager/0.log" Feb 19 11:01:51 crc kubenswrapper[4780]: I0219 11:01:51.347410 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-ln8qt_49bbe48e-3c79-422c-a85b-15198ec1a88f/manager/0.log" Feb 19 11:01:51 crc kubenswrapper[4780]: I0219 11:01:51.682255 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-kcn42_4118293d-1deb-4ce3-92e8-6055d0bc5000/manager/0.log" Feb 19 11:01:52 crc kubenswrapper[4780]: I0219 11:01:52.333200 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-glkqc_53377a47-fc5a-452e-84ca-235e1d71311c/manager/0.log" Feb 19 11:01:53 crc kubenswrapper[4780]: I0219 11:01:53.262447 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-bctgj_a2874085-9630-45db-aaa1-2e01dd53d11f/manager/0.log" Feb 19 11:01:53 crc kubenswrapper[4780]: I0219 11:01:53.531864 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-474lg_ad05a5f1-785e-4342-856b-e717d51e36bc/manager/0.log" Feb 19 11:01:53 crc kubenswrapper[4780]: I0219 11:01:53.625249 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-b6vcv_36d2971c-bf26-4327-9144-f5faa7490b05/manager/0.log" Feb 19 11:01:53 crc kubenswrapper[4780]: I0219 11:01:53.791530 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-gvbzc_2153b5f8-a977-41b6-a736-659e1a71cb99/manager/0.log" Feb 19 
11:01:53 crc kubenswrapper[4780]: I0219 11:01:53.927589 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-pvff4_ee0ca95b-15e2-4d79-84d1-8600d083dbb0/manager/0.log" Feb 19 11:01:54 crc kubenswrapper[4780]: I0219 11:01:54.209824 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-rxkld_2744300e-54a9-4fba-88a0-fe6741f88116/manager/0.log" Feb 19 11:01:54 crc kubenswrapper[4780]: I0219 11:01:54.547812 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-jmvjt_efb49c0a-bd0d-4ad4-befd-1e4a645afcc0/manager/0.log" Feb 19 11:01:54 crc kubenswrapper[4780]: I0219 11:01:54.663980 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-r82s8_506816c7-86de-45c3-800d-96fe50b629f1/manager/0.log" Feb 19 11:01:55 crc kubenswrapper[4780]: I0219 11:01:55.053322 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-gbqtm_e6727151-55c9-47ba-b54e-45938c21180a/operator/0.log" Feb 19 11:01:55 crc kubenswrapper[4780]: I0219 11:01:55.497937 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-pbw7v_4500f812-fa02-4888-8c0c-0627f7bbccf9/manager/0.log" Feb 19 11:01:55 crc kubenswrapper[4780]: I0219 11:01:55.562638 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g4spf_f7592074-215c-43d2-aa60-edaf6a5a141c/registry-server/0.log" Feb 19 11:01:55 crc kubenswrapper[4780]: I0219 11:01:55.879569 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-lbtq7_a0decbbd-1c8f-4bb7-bcdc-4d930757d0f2/manager/0.log" Feb 19 11:01:55 crc kubenswrapper[4780]: I0219 11:01:55.900156 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-cc8zw_5c421e2a-bf97-429f-9cb1-8945c54d4927/manager/0.log" Feb 19 11:01:56 crc kubenswrapper[4780]: I0219 11:01:56.221289 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9lssb_53b4b555-856b-4db0-b8e5-de61ff768cc6/operator/0.log" Feb 19 11:01:56 crc kubenswrapper[4780]: I0219 11:01:56.386851 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-rmrs9_7a8f8f4b-597a-4eb7-b416-81ae3f73e306/manager/0.log" Feb 19 11:01:56 crc kubenswrapper[4780]: I0219 11:01:56.759433 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-gwz8w_e6ba6725-19ae-4588-9ff1-a9830487fa82/manager/0.log" Feb 19 11:01:56 crc kubenswrapper[4780]: I0219 11:01:56.899191 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-6dr7r_97826b66-db76-40d7-a06f-cb6f55739cc9/manager/0.log" Feb 19 11:01:56 crc kubenswrapper[4780]: I0219 11:01:56.982089 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-czsms_629b235e-a906-442c-b653-c829f6f4e4bd/manager/0.log" Feb 19 11:01:59 crc kubenswrapper[4780]: I0219 11:01:59.204038 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-djmnp_9466b8d7-85ef-4709-ae7c-87f0bf531fe0/manager/0.log" Feb 19 11:01:59 crc kubenswrapper[4780]: I0219 11:01:59.495374 4780 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-dk2pj_1d22904f-de9c-407e-9757-72c0eca19ea1/manager/0.log" Feb 19 11:02:25 crc kubenswrapper[4780]: I0219 11:02:25.480139 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mjf56_d73a1444-2bd2-4dd2-a7d6-9bf1ba3e5dc7/control-plane-machine-set-operator/0.log" Feb 19 11:02:25 crc kubenswrapper[4780]: I0219 11:02:25.724993 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b9s5t_ad7d9950-0edb-4999-86d0-269be581a7f7/kube-rbac-proxy/0.log" Feb 19 11:02:25 crc kubenswrapper[4780]: I0219 11:02:25.807359 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b9s5t_ad7d9950-0edb-4999-86d0-269be581a7f7/machine-api-operator/0.log" Feb 19 11:02:41 crc kubenswrapper[4780]: I0219 11:02:41.256288 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-27s7m_0fd4d870-a923-461c-bfa8-c9bfe0d87c1a/cert-manager-controller/0.log" Feb 19 11:02:41 crc kubenswrapper[4780]: I0219 11:02:41.434779 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-hfz58_8e39e1f9-ed95-4bc3-8ab3-c786da63825c/cert-manager-cainjector/0.log" Feb 19 11:02:41 crc kubenswrapper[4780]: I0219 11:02:41.505907 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-8r6cz_70c8faf1-2778-49cb-a695-9a23b3df8652/cert-manager-webhook/0.log" Feb 19 11:02:55 crc kubenswrapper[4780]: I0219 11:02:55.998344 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-6z999_d9259cc4-9cb2-4f82-8c90-bf9ee6871fe3/nmstate-console-plugin/0.log" Feb 19 11:02:56 crc kubenswrapper[4780]: I0219 11:02:56.121644 4780 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c8gqg_1cba9428-b26b-44ee-84c5-cac06ce86f4d/nmstate-handler/0.log" Feb 19 11:02:56 crc kubenswrapper[4780]: I0219 11:02:56.215069 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-btq8f_f48e99ea-198f-48d8-b2ef-83602d80118b/kube-rbac-proxy/0.log" Feb 19 11:02:56 crc kubenswrapper[4780]: I0219 11:02:56.231581 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-btq8f_f48e99ea-198f-48d8-b2ef-83602d80118b/nmstate-metrics/0.log" Feb 19 11:02:56 crc kubenswrapper[4780]: I0219 11:02:56.364251 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-nm99k_43fa6e0c-4b7c-4bcd-b9f3-f9ac6e146e54/nmstate-operator/0.log" Feb 19 11:02:56 crc kubenswrapper[4780]: I0219 11:02:56.505180 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-7xww2_c38ec25b-ac0c-4f99-a3c9-ca226d8aa544/nmstate-webhook/0.log" Feb 19 11:03:10 crc kubenswrapper[4780]: I0219 11:03:10.417670 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mjrgp_43b00acb-5aa9-4e89-8eaf-43217205623b/prometheus-operator/0.log" Feb 19 11:03:10 crc kubenswrapper[4780]: I0219 11:03:10.655082 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm_def2ec98-a720-44e2-b7ae-e4b917a073e0/prometheus-operator-admission-webhook/0.log" Feb 19 11:03:10 crc kubenswrapper[4780]: I0219 11:03:10.655690 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g_9d5cb621-e571-4a9c-b564-f9ce5b07295f/prometheus-operator-admission-webhook/0.log" Feb 19 11:03:10 crc kubenswrapper[4780]: I0219 11:03:10.852758 4780 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4fxhv_df4e54f0-d1df-44af-ba63-0e8a6791a6d0/operator/0.log" Feb 19 11:03:10 crc kubenswrapper[4780]: I0219 11:03:10.923508 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nx8q8_f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a/perses-operator/0.log" Feb 19 11:03:19 crc kubenswrapper[4780]: I0219 11:03:19.180235 4780 scope.go:117] "RemoveContainer" containerID="178406bd171bcd9aeca1f0873d3be7c514230dc317f85f626af7a8bff6f4cfbe" Feb 19 11:03:19 crc kubenswrapper[4780]: I0219 11:03:19.232853 4780 scope.go:117] "RemoveContainer" containerID="b65aa5172e233e977451ca38e07e75b96aeccaae95379ab7fe8c5c59c14952e1" Feb 19 11:03:19 crc kubenswrapper[4780]: I0219 11:03:19.330915 4780 scope.go:117] "RemoveContainer" containerID="d00426e196cfcc5cc056de6c24710bf92cd33bb13f66bb1769632162856343bd" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.077633 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-xq67r_e3686843-4ebe-479a-9ed7-08aa57f7aa39/kube-rbac-proxy/0.log" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.509895 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-xq67r_e3686843-4ebe-479a-9ed7-08aa57f7aa39/controller/0.log" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.531958 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-frr-files/0.log" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.754885 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-frr-files/0.log" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.785937 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-metrics/0.log" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.857609 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-reloader/0.log" Feb 19 11:03:28 crc kubenswrapper[4780]: I0219 11:03:28.863105 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-reloader/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.476020 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-reloader/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.511696 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-metrics/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.520407 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-metrics/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.528972 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-frr-files/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.762415 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-frr-files/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.781859 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-reloader/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.825948 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/cp-metrics/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.840424 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/controller/0.log" Feb 19 11:03:29 crc kubenswrapper[4780]: I0219 11:03:29.994838 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/frr-metrics/0.log" Feb 19 11:03:30 crc kubenswrapper[4780]: I0219 11:03:30.080207 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/kube-rbac-proxy/0.log" Feb 19 11:03:30 crc kubenswrapper[4780]: I0219 11:03:30.090347 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/kube-rbac-proxy-frr/0.log" Feb 19 11:03:30 crc kubenswrapper[4780]: I0219 11:03:30.222147 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/reloader/0.log" Feb 19 11:03:30 crc kubenswrapper[4780]: I0219 11:03:30.310836 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-59pnp_4cda5e67-a145-4db5-b4f7-5a0dca33ccd3/frr-k8s-webhook-server/0.log" Feb 19 11:03:30 crc kubenswrapper[4780]: I0219 11:03:30.636243 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-595746788d-rtthh_cf9a966b-352f-4e6c-9b56-e44c711b07d7/manager/0.log" Feb 19 11:03:31 crc kubenswrapper[4780]: I0219 11:03:31.420300 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b5d8f86db-qcszk_7f234297-fdfa-4fb9-89e2-0d970160c0a4/webhook-server/0.log" Feb 19 11:03:31 crc kubenswrapper[4780]: I0219 11:03:31.480381 4780 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2dzr_16814ff5-74e6-4366-b12c-683bd1a455d0/kube-rbac-proxy/0.log" Feb 19 11:03:32 crc kubenswrapper[4780]: I0219 11:03:32.726864 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2dzr_16814ff5-74e6-4366-b12c-683bd1a455d0/speaker/0.log" Feb 19 11:03:33 crc kubenswrapper[4780]: I0219 11:03:33.813843 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-txb2m_d67fd9e4-d9ed-457d-9b03-d226672c5e12/frr/0.log" Feb 19 11:03:36 crc kubenswrapper[4780]: I0219 11:03:36.336904 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:03:36 crc kubenswrapper[4780]: I0219 11:03:36.337408 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.145313 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/util/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.354509 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/util/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.400819 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/pull/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.435606 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/pull/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.642445 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/extract/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.658252 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/util/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.671617 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56sgsn_a99ca6ec-4c17-45ff-b344-70e65b475774/pull/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.836548 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/util/0.log" Feb 19 11:03:47 crc kubenswrapper[4780]: I0219 11:03:47.997740 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/util/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.012184 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/pull/0.log" Feb 19 
11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.055950 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/pull/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.230356 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/util/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.231443 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/extract/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.239847 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vl5pd_ab6a0221-99a2-41ea-817f-391f39843ea8/pull/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.405551 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/util/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.579456 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/util/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.595934 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/pull/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.602028 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/pull/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.768565 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/util/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.776584 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/extract/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.780639 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gvrbk_c1790072-7e58-47b3-8895-51dade41bbf1/pull/0.log" Feb 19 11:03:48 crc kubenswrapper[4780]: I0219 11:03:48.987776 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/extract-utilities/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.150919 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/extract-content/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.162483 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/extract-content/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.172822 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/extract-utilities/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.395439 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/extract-utilities/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.423113 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/extract-content/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.635079 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/extract-utilities/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.722030 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lscsw_0cc4d5d4-5cfc-43b9-8abe-59fd6e9f062b/registry-server/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.846979 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/extract-utilities/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.891765 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/extract-content/0.log" Feb 19 11:03:49 crc kubenswrapper[4780]: I0219 11:03:49.918292 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/extract-content/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.129710 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/extract-content/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.140238 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/extract-utilities/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.375246 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/util/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.596975 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/pull/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.632094 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/pull/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.697080 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/util/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.844274 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/util/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.865862 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/pull/0.log" Feb 19 11:03:50 crc kubenswrapper[4780]: I0219 11:03:50.904904 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqslpq_8eabe830-a7df-46f3-840e-5585eae95c5a/extract/0.log" Feb 19 11:03:51 crc 
kubenswrapper[4780]: I0219 11:03:51.130896 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-blt9s_3bc96cb9-c467-4da1-8aef-8ca0ef0889a4/marketplace-operator/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.194677 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/extract-utilities/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.396677 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/extract-utilities/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.497923 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f9h8d_a253deca-98c7-4fa6-a416-c2f951e824f0/registry-server/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.578488 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/extract-content/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.614565 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/extract-content/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.744993 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/extract-utilities/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.759692 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/extract-content/0.log" Feb 19 11:03:51 crc kubenswrapper[4780]: I0219 11:03:51.947400 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/extract-utilities/0.log" Feb 19 11:03:52 crc kubenswrapper[4780]: I0219 11:03:52.075371 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmbhx_c95adfb8-f3a6-4b61-a38d-258dc4528d47/registry-server/0.log" Feb 19 11:03:52 crc kubenswrapper[4780]: I0219 11:03:52.165478 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/extract-content/0.log" Feb 19 11:03:52 crc kubenswrapper[4780]: I0219 11:03:52.178419 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/extract-utilities/0.log" Feb 19 11:03:52 crc kubenswrapper[4780]: I0219 11:03:52.245718 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/extract-content/0.log" Feb 19 11:03:52 crc kubenswrapper[4780]: I0219 11:03:52.437560 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/extract-content/0.log" Feb 19 11:03:52 crc kubenswrapper[4780]: I0219 11:03:52.559722 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/extract-utilities/0.log" Feb 19 11:03:53 crc kubenswrapper[4780]: I0219 11:03:53.837273 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c7svd_5b287375-fa66-41f5-a4a6-d5b540e56b4b/registry-server/0.log" Feb 19 11:04:06 crc kubenswrapper[4780]: I0219 11:04:06.335895 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:04:06 crc kubenswrapper[4780]: I0219 11:04:06.336479 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:04:06 crc kubenswrapper[4780]: I0219 11:04:06.924207 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mjrgp_43b00acb-5aa9-4e89-8eaf-43217205623b/prometheus-operator/0.log" Feb 19 11:04:06 crc kubenswrapper[4780]: I0219 11:04:06.982248 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-76f9b44b77-ldtqm_def2ec98-a720-44e2-b7ae-e4b917a073e0/prometheus-operator-admission-webhook/0.log" Feb 19 11:04:06 crc kubenswrapper[4780]: I0219 11:04:06.989551 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-76f9b44b77-pcg9g_9d5cb621-e571-4a9c-b564-f9ce5b07295f/prometheus-operator-admission-webhook/0.log" Feb 19 11:04:07 crc kubenswrapper[4780]: I0219 11:04:07.139149 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4fxhv_df4e54f0-d1df-44af-ba63-0e8a6791a6d0/operator/0.log" Feb 19 11:04:07 crc kubenswrapper[4780]: I0219 11:04:07.142648 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nx8q8_f65a5e3a-d2c6-42fd-82fa-f9e8d3f3ae1a/perses-operator/0.log" Feb 19 11:04:20 crc kubenswrapper[4780]: E0219 11:04:20.013794 4780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.103:45986->38.102.83.103:37621: write tcp 38.102.83.103:45986->38.102.83.103:37621: write: broken pipe Feb 19 11:04:22 crc kubenswrapper[4780]: E0219 11:04:22.997586 4780 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.103:46066->38.102.83.103:37621: read tcp 38.102.83.103:46066->38.102.83.103:37621: read: connection reset by peer Feb 19 11:04:29 crc kubenswrapper[4780]: E0219 11:04:29.243760 4780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.103:43406->38.102.83.103:37621: write tcp 38.102.83.103:43406->38.102.83.103:37621: write: connection reset by peer Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.336568 4780 patch_prober.go:28] interesting pod/machine-config-daemon-rw5ts container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.337441 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.337494 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.340249 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"} pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.340327 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" containerName="machine-config-daemon" containerID="cri-o://6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" gracePeriod=600 Feb 19 11:04:36 crc kubenswrapper[4780]: E0219 11:04:36.474642 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.653852 4780 generic.go:334] "Generic (PLEG): container finished" podID="920aa359-8647-440a-842e-066313c39414" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" exitCode=0 Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.653945 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerDied","Data":"6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"} Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.654699 4780 scope.go:117] "RemoveContainer" containerID="41240fdb7cc7aac53ceda60c88994d134b1b2c646766972687361b786e0ff0fc" Feb 19 11:04:36 crc kubenswrapper[4780]: I0219 11:04:36.655735 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:04:36 crc kubenswrapper[4780]: E0219 11:04:36.656074 4780 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.790878 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4r9qn"] Feb 19 11:04:42 crc kubenswrapper[4780]: E0219 11:04:42.791990 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="extract-content" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.792005 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="extract-content" Feb 19 11:04:42 crc kubenswrapper[4780]: E0219 11:04:42.792023 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="extract-utilities" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.792030 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="extract-utilities" Feb 19 11:04:42 crc kubenswrapper[4780]: E0219 11:04:42.792077 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="registry-server" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.792084 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="registry-server" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.792308 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="115b6378-004d-4ad9-96b2-afd59731c76d" containerName="registry-server" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 
11:04:42.794481 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r9qn" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.807736 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r9qn"] Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.885573 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqhqb\" (UniqueName: \"kubernetes.io/projected/e8ec8140-d26a-4dfd-91e1-9894fd388121-kube-api-access-jqhqb\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.886180 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-catalog-content\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.886229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-utilities\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn" Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.988357 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqhqb\" (UniqueName: \"kubernetes.io/projected/e8ec8140-d26a-4dfd-91e1-9894fd388121-kube-api-access-jqhqb\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn" Feb 19 11:04:42 crc 
kubenswrapper[4780]: I0219 11:04:42.988484 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-catalog-content\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.988539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-utilities\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.989265 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-utilities\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:42 crc kubenswrapper[4780]: I0219 11:04:42.989308 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-catalog-content\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:43 crc kubenswrapper[4780]: I0219 11:04:43.016299 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqhqb\" (UniqueName: \"kubernetes.io/projected/e8ec8140-d26a-4dfd-91e1-9894fd388121-kube-api-access-jqhqb\") pod \"redhat-marketplace-4r9qn\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") " pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:43 crc kubenswrapper[4780]: I0219 11:04:43.156159 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:44 crc kubenswrapper[4780]: I0219 11:04:44.167867 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r9qn"]
Feb 19 11:04:44 crc kubenswrapper[4780]: I0219 11:04:44.781681 4780 generic.go:334] "Generic (PLEG): container finished" podID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerID="a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84" exitCode=0
Feb 19 11:04:44 crc kubenswrapper[4780]: I0219 11:04:44.781749 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerDied","Data":"a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84"}
Feb 19 11:04:44 crc kubenswrapper[4780]: I0219 11:04:44.781780 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerStarted","Data":"4c69355d7a1e5d62c1918e2860e81511a11e6adb0dc251e27253ea68f4ee4a06"}
Feb 19 11:04:44 crc kubenswrapper[4780]: I0219 11:04:44.785222 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 11:04:45 crc kubenswrapper[4780]: I0219 11:04:45.802801 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerStarted","Data":"b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d"}
Feb 19 11:04:46 crc kubenswrapper[4780]: I0219 11:04:46.812215 4780 generic.go:334] "Generic (PLEG): container finished" podID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerID="b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d" exitCode=0
Feb 19 11:04:46 crc kubenswrapper[4780]: I0219 11:04:46.812524 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerDied","Data":"b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d"}
Feb 19 11:04:46 crc kubenswrapper[4780]: I0219 11:04:46.938318 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:04:46 crc kubenswrapper[4780]: E0219 11:04:46.938669 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:04:47 crc kubenswrapper[4780]: I0219 11:04:47.827251 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerStarted","Data":"590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc"}
Feb 19 11:04:47 crc kubenswrapper[4780]: I0219 11:04:47.870047 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4r9qn" podStartSLOduration=3.243779439 podStartE2EDuration="5.870006557s" podCreationTimestamp="2026-02-19 11:04:42 +0000 UTC" firstStartedPulling="2026-02-19 11:04:44.784692095 +0000 UTC m=+9827.528349544" lastFinishedPulling="2026-02-19 11:04:47.410919213 +0000 UTC m=+9830.154576662" observedRunningTime="2026-02-19 11:04:47.859111704 +0000 UTC m=+9830.602769173" watchObservedRunningTime="2026-02-19 11:04:47.870006557 +0000 UTC m=+9830.613664006"
Feb 19 11:04:53 crc kubenswrapper[4780]: I0219 11:04:53.157274 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:53 crc kubenswrapper[4780]: I0219 11:04:53.158808 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:53 crc kubenswrapper[4780]: I0219 11:04:53.255739 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:53 crc kubenswrapper[4780]: I0219 11:04:53.967143 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:54 crc kubenswrapper[4780]: I0219 11:04:54.017680 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r9qn"]
Feb 19 11:04:55 crc kubenswrapper[4780]: I0219 11:04:55.933648 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4r9qn" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="registry-server" containerID="cri-o://590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc" gracePeriod=2
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.476114 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.514546 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-catalog-content\") pod \"e8ec8140-d26a-4dfd-91e1-9894fd388121\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") "
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.515039 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-utilities\") pod \"e8ec8140-d26a-4dfd-91e1-9894fd388121\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") "
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.515413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqhqb\" (UniqueName: \"kubernetes.io/projected/e8ec8140-d26a-4dfd-91e1-9894fd388121-kube-api-access-jqhqb\") pod \"e8ec8140-d26a-4dfd-91e1-9894fd388121\" (UID: \"e8ec8140-d26a-4dfd-91e1-9894fd388121\") "
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.517869 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-utilities" (OuterVolumeSpecName: "utilities") pod "e8ec8140-d26a-4dfd-91e1-9894fd388121" (UID: "e8ec8140-d26a-4dfd-91e1-9894fd388121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.546371 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8ec8140-d26a-4dfd-91e1-9894fd388121" (UID: "e8ec8140-d26a-4dfd-91e1-9894fd388121"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.618142 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.618180 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ec8140-d26a-4dfd-91e1-9894fd388121-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.969587 4780 generic.go:334] "Generic (PLEG): container finished" podID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerID="590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc" exitCode=0
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.969737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerDied","Data":"590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc"}
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.969969 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r9qn"
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.970229 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r9qn" event={"ID":"e8ec8140-d26a-4dfd-91e1-9894fd388121","Type":"ContainerDied","Data":"4c69355d7a1e5d62c1918e2860e81511a11e6adb0dc251e27253ea68f4ee4a06"}
Feb 19 11:04:56 crc kubenswrapper[4780]: I0219 11:04:56.970260 4780 scope.go:117] "RemoveContainer" containerID="590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.005779 4780 scope.go:117] "RemoveContainer" containerID="b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.169829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ec8140-d26a-4dfd-91e1-9894fd388121-kube-api-access-jqhqb" (OuterVolumeSpecName: "kube-api-access-jqhqb") pod "e8ec8140-d26a-4dfd-91e1-9894fd388121" (UID: "e8ec8140-d26a-4dfd-91e1-9894fd388121"). InnerVolumeSpecName "kube-api-access-jqhqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.190387 4780 scope.go:117] "RemoveContainer" containerID="a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.246226 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqhqb\" (UniqueName: \"kubernetes.io/projected/e8ec8140-d26a-4dfd-91e1-9894fd388121-kube-api-access-jqhqb\") on node \"crc\" DevicePath \"\""
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.500647 4780 scope.go:117] "RemoveContainer" containerID="590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc"
Feb 19 11:04:57 crc kubenswrapper[4780]: E0219 11:04:57.501108 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc\": container with ID starting with 590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc not found: ID does not exist" containerID="590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.501321 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc"} err="failed to get container status \"590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc\": rpc error: code = NotFound desc = could not find container \"590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc\": container with ID starting with 590f69f9e4b2511bb9d82da33ca6ac308f83eb921419ebeed2b12dfc3f5c51cc not found: ID does not exist"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.501365 4780 scope.go:117] "RemoveContainer" containerID="b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d"
Feb 19 11:04:57 crc kubenswrapper[4780]: E0219 11:04:57.502110 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d\": container with ID starting with b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d not found: ID does not exist" containerID="b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.502172 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d"} err="failed to get container status \"b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d\": rpc error: code = NotFound desc = could not find container \"b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d\": container with ID starting with b2eee5e31bf0ac782819a4dd4d810de8a2eca6062ea30882d0f4c2576eead05d not found: ID does not exist"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.502230 4780 scope.go:117] "RemoveContainer" containerID="a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84"
Feb 19 11:04:57 crc kubenswrapper[4780]: E0219 11:04:57.502713 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84\": container with ID starting with a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84 not found: ID does not exist" containerID="a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.502741 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84"} err="failed to get container status \"a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84\": rpc error: code = NotFound desc = could not find container \"a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84\": container with ID starting with a99748686b07a06d6960e23757409ad9fe5a8ec31db2fd5763c6d1213b54ac84 not found: ID does not exist"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.580549 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r9qn"]
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.596905 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r9qn"]
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.947568 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:04:57 crc kubenswrapper[4780]: E0219 11:04:57.948261 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:04:57 crc kubenswrapper[4780]: I0219 11:04:57.972909 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" path="/var/lib/kubelet/pods/e8ec8140-d26a-4dfd-91e1-9894fd388121/volumes"
Feb 19 11:05:12 crc kubenswrapper[4780]: I0219 11:05:12.938817 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:05:12 crc kubenswrapper[4780]: E0219 11:05:12.939620 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:05:27 crc kubenswrapper[4780]: I0219 11:05:27.946383 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:05:27 crc kubenswrapper[4780]: E0219 11:05:27.949024 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:05:41 crc kubenswrapper[4780]: I0219 11:05:41.939052 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:05:41 crc kubenswrapper[4780]: E0219 11:05:41.939891 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:05:53 crc kubenswrapper[4780]: I0219 11:05:53.938967 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:05:53 crc kubenswrapper[4780]: E0219 11:05:53.939806 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:06:05 crc kubenswrapper[4780]: I0219 11:06:05.938412 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:06:05 crc kubenswrapper[4780]: E0219 11:06:05.939396 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:06:19 crc kubenswrapper[4780]: I0219 11:06:19.939434 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:06:19 crc kubenswrapper[4780]: E0219 11:06:19.940214 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:06:25 crc kubenswrapper[4780]: I0219 11:06:25.965625 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5e9beab-82b3-4cde-af54-4e6db119bf2c" containerID="4f1a22a576bcaa5a70f07368bf1dcd1930a11a79b28ac86d10ab484fd3898819" exitCode=0
Feb 19 11:06:25 crc kubenswrapper[4780]: I0219 11:06:25.966248 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bprf7/must-gather-s9cd6" event={"ID":"a5e9beab-82b3-4cde-af54-4e6db119bf2c","Type":"ContainerDied","Data":"4f1a22a576bcaa5a70f07368bf1dcd1930a11a79b28ac86d10ab484fd3898819"}
Feb 19 11:06:25 crc kubenswrapper[4780]: I0219 11:06:25.968800 4780 scope.go:117] "RemoveContainer" containerID="4f1a22a576bcaa5a70f07368bf1dcd1930a11a79b28ac86d10ab484fd3898819"
Feb 19 11:06:26 crc kubenswrapper[4780]: I0219 11:06:26.858884 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bprf7_must-gather-s9cd6_a5e9beab-82b3-4cde-af54-4e6db119bf2c/gather/0.log"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.255994 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c9llt"]
Feb 19 11:06:30 crc kubenswrapper[4780]: E0219 11:06:30.257180 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="extract-content"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.257194 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="extract-content"
Feb 19 11:06:30 crc kubenswrapper[4780]: E0219 11:06:30.257238 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="registry-server"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.257244 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="registry-server"
Feb 19 11:06:30 crc kubenswrapper[4780]: E0219 11:06:30.257267 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="extract-utilities"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.257276 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="extract-utilities"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.257496 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ec8140-d26a-4dfd-91e1-9894fd388121" containerName="registry-server"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.259501 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.278972 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9llt"]
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.316579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqcr\" (UniqueName: \"kubernetes.io/projected/25f085a5-e3f8-41a6-9a11-87200d838f16-kube-api-access-wgqcr\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.316683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-catalog-content\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.317051 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-utilities\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.419603 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgqcr\" (UniqueName: \"kubernetes.io/projected/25f085a5-e3f8-41a6-9a11-87200d838f16-kube-api-access-wgqcr\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.419679 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-catalog-content\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.419829 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-utilities\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.420498 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-catalog-content\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.420540 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-utilities\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.451204 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgqcr\" (UniqueName: \"kubernetes.io/projected/25f085a5-e3f8-41a6-9a11-87200d838f16-kube-api-access-wgqcr\") pod \"community-operators-c9llt\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:30 crc kubenswrapper[4780]: I0219 11:06:30.604391 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9llt"
Feb 19 11:06:31 crc kubenswrapper[4780]: I0219 11:06:31.381594 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9llt"]
Feb 19 11:06:32 crc kubenswrapper[4780]: I0219 11:06:32.038980 4780 generic.go:334] "Generic (PLEG): container finished" podID="25f085a5-e3f8-41a6-9a11-87200d838f16" containerID="9ef5e516bf893b961fbd1aaf06dbc5f84fd8b03436adb916363a743f6d0190b6" exitCode=0
Feb 19 11:06:32 crc kubenswrapper[4780]: I0219 11:06:32.039291 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerDied","Data":"9ef5e516bf893b961fbd1aaf06dbc5f84fd8b03436adb916363a743f6d0190b6"}
Feb 19 11:06:32 crc kubenswrapper[4780]: I0219 11:06:32.039440 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerStarted","Data":"b905829166844e5742b96f0e53d2fd13564a9d19a9ed4fbc6b0f23e675080ab2"}
Feb 19 11:06:33 crc kubenswrapper[4780]: I0219 11:06:33.050665 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerStarted","Data":"d85fdbad5c893d0ac6ffe055cbd0cd9b39ad07212a81817ccbe5d1d2d8721f66"}
Feb 19 11:06:33 crc kubenswrapper[4780]: I0219 11:06:33.848940 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tbl44"]
Feb 19 11:06:33 crc kubenswrapper[4780]: I0219 11:06:33.851986 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:33 crc kubenswrapper[4780]: I0219 11:06:33.862880 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbl44"]
Feb 19 11:06:33 crc kubenswrapper[4780]: I0219 11:06:33.939248 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a"
Feb 19 11:06:33 crc kubenswrapper[4780]: E0219 11:06:33.939526 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.044715 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-utilities\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.044793 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-catalog-content\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.045624 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwf2\" (UniqueName: \"kubernetes.io/projected/057dc823-1b15-455c-b2a2-c4f5b31ff29b-kube-api-access-8pwf2\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.148223 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwf2\" (UniqueName: \"kubernetes.io/projected/057dc823-1b15-455c-b2a2-c4f5b31ff29b-kube-api-access-8pwf2\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.148463 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-utilities\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.148539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-catalog-content\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.149056 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-utilities\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.149142 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-catalog-content\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.179302 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwf2\" (UniqueName: \"kubernetes.io/projected/057dc823-1b15-455c-b2a2-c4f5b31ff29b-kube-api-access-8pwf2\") pod \"redhat-operators-tbl44\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.219942 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbl44"
Feb 19 11:06:34 crc kubenswrapper[4780]: I0219 11:06:34.765164 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbl44"]
Feb 19 11:06:35 crc kubenswrapper[4780]: I0219 11:06:35.081435 4780 generic.go:334] "Generic (PLEG): container finished" podID="25f085a5-e3f8-41a6-9a11-87200d838f16" containerID="d85fdbad5c893d0ac6ffe055cbd0cd9b39ad07212a81817ccbe5d1d2d8721f66" exitCode=0
Feb 19 11:06:35 crc kubenswrapper[4780]: I0219 11:06:35.081663 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerDied","Data":"d85fdbad5c893d0ac6ffe055cbd0cd9b39ad07212a81817ccbe5d1d2d8721f66"}
Feb 19 11:06:35 crc kubenswrapper[4780]: I0219 11:06:35.094825 4780 generic.go:334] "Generic (PLEG): container finished" podID="057dc823-1b15-455c-b2a2-c4f5b31ff29b" containerID="b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583" exitCode=0
Feb 19 11:06:35 crc kubenswrapper[4780]: I0219 11:06:35.094885 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerDied","Data":"b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583"}
Feb 19 11:06:35 crc kubenswrapper[4780]: I0219 11:06:35.094923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerStarted","Data":"a764316b52d20718c8c4911fb505f73a006abef6ba981a7469b08ab652d6712e"}
Feb 19 11:06:36 crc kubenswrapper[4780]: I0219 11:06:36.118299 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerStarted","Data":"2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7"}
Feb 19 11:06:36 crc kubenswrapper[4780]: I0219 11:06:36.137830 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerStarted","Data":"5450c8ef7d5107a1715f8ba9b58417efe6bd22c2925b7ea66b2575db7bc15979"}
Feb 19 11:06:36 crc kubenswrapper[4780]: I0219 11:06:36.221655 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c9llt" podStartSLOduration=2.77238631 podStartE2EDuration="6.221626351s" podCreationTimestamp="2026-02-19 11:06:30 +0000 UTC" firstStartedPulling="2026-02-19 11:06:32.042229155 +0000 UTC m=+9934.785886604" lastFinishedPulling="2026-02-19 11:06:35.491469196 +0000 UTC m=+9938.235126645" observedRunningTime="2026-02-19 11:06:36.205690062 +0000 UTC m=+9938.949347511" watchObservedRunningTime="2026-02-19 11:06:36.221626351 +0000 UTC m=+9938.965283810"
Feb 19 11:06:37 crc kubenswrapper[4780]: I0219 11:06:37.909402 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bprf7/must-gather-s9cd6"]
Feb 19 11:06:37 crc kubenswrapper[4780]: I0219 11:06:37.909715 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bprf7/must-gather-s9cd6" podUID="a5e9beab-82b3-4cde-af54-4e6db119bf2c" containerName="copy" containerID="cri-o://ea82b93a172f70eb31928fe8a9c77599f69cabff3ef3a249a60ae0c0321046bd" gracePeriod=2
Feb 19 11:06:37 crc kubenswrapper[4780]: I0219 11:06:37.959863 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bprf7/must-gather-s9cd6"]
Feb 19 11:06:38 crc kubenswrapper[4780]: I0219 11:06:38.164814 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bprf7_must-gather-s9cd6_a5e9beab-82b3-4cde-af54-4e6db119bf2c/copy/0.log"
Feb 19 11:06:38 crc kubenswrapper[4780]: I0219 11:06:38.165351 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5e9beab-82b3-4cde-af54-4e6db119bf2c" containerID="ea82b93a172f70eb31928fe8a9c77599f69cabff3ef3a249a60ae0c0321046bd" exitCode=143
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.706305 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bprf7_must-gather-s9cd6_a5e9beab-82b3-4cde-af54-4e6db119bf2c/copy/0.log"
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.709057 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/must-gather-s9cd6"
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.758817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5e9beab-82b3-4cde-af54-4e6db119bf2c-must-gather-output\") pod \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") "
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.758999 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t4kf\" (UniqueName: \"kubernetes.io/projected/a5e9beab-82b3-4cde-af54-4e6db119bf2c-kube-api-access-2t4kf\") pod \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\" (UID: \"a5e9beab-82b3-4cde-af54-4e6db119bf2c\") "
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.774541 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e9beab-82b3-4cde-af54-4e6db119bf2c-kube-api-access-2t4kf" (OuterVolumeSpecName: "kube-api-access-2t4kf") pod "a5e9beab-82b3-4cde-af54-4e6db119bf2c" (UID: "a5e9beab-82b3-4cde-af54-4e6db119bf2c"). InnerVolumeSpecName "kube-api-access-2t4kf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.863440 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t4kf\" (UniqueName: \"kubernetes.io/projected/a5e9beab-82b3-4cde-af54-4e6db119bf2c-kube-api-access-2t4kf\") on node \"crc\" DevicePath \"\""
Feb 19 11:06:39 crc kubenswrapper[4780]: I0219 11:06:39.990760 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e9beab-82b3-4cde-af54-4e6db119bf2c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a5e9beab-82b3-4cde-af54-4e6db119bf2c" (UID: "a5e9beab-82b3-4cde-af54-4e6db119bf2c"). InnerVolumeSpecName "must-gather-output".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.073957 4780 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5e9beab-82b3-4cde-af54-4e6db119bf2c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.215107 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bprf7_must-gather-s9cd6_a5e9beab-82b3-4cde-af54-4e6db119bf2c/copy/0.log" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.215873 4780 scope.go:117] "RemoveContainer" containerID="ea82b93a172f70eb31928fe8a9c77599f69cabff3ef3a249a60ae0c0321046bd" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.215902 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bprf7/must-gather-s9cd6" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.248711 4780 scope.go:117] "RemoveContainer" containerID="4f1a22a576bcaa5a70f07368bf1dcd1930a11a79b28ac86d10ab484fd3898819" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.605549 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c9llt" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.605627 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c9llt" Feb 19 11:06:40 crc kubenswrapper[4780]: I0219 11:06:40.667277 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c9llt" Feb 19 11:06:41 crc kubenswrapper[4780]: I0219 11:06:41.344997 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c9llt" Feb 19 11:06:41 crc kubenswrapper[4780]: I0219 11:06:41.961882 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a5e9beab-82b3-4cde-af54-4e6db119bf2c" path="/var/lib/kubelet/pods/a5e9beab-82b3-4cde-af54-4e6db119bf2c/volumes" Feb 19 11:06:42 crc kubenswrapper[4780]: I0219 11:06:42.045354 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c9llt"] Feb 19 11:06:43 crc kubenswrapper[4780]: I0219 11:06:43.251103 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c9llt" podUID="25f085a5-e3f8-41a6-9a11-87200d838f16" containerName="registry-server" containerID="cri-o://5450c8ef7d5107a1715f8ba9b58417efe6bd22c2925b7ea66b2575db7bc15979" gracePeriod=2 Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.268392 4780 generic.go:334] "Generic (PLEG): container finished" podID="25f085a5-e3f8-41a6-9a11-87200d838f16" containerID="5450c8ef7d5107a1715f8ba9b58417efe6bd22c2925b7ea66b2575db7bc15979" exitCode=0 Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.269414 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerDied","Data":"5450c8ef7d5107a1715f8ba9b58417efe6bd22c2925b7ea66b2575db7bc15979"} Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.269936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9llt" event={"ID":"25f085a5-e3f8-41a6-9a11-87200d838f16","Type":"ContainerDied","Data":"b905829166844e5742b96f0e53d2fd13564a9d19a9ed4fbc6b0f23e675080ab2"} Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.269987 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b905829166844e5742b96f0e53d2fd13564a9d19a9ed4fbc6b0f23e675080ab2" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.315492 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c9llt" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.402215 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgqcr\" (UniqueName: \"kubernetes.io/projected/25f085a5-e3f8-41a6-9a11-87200d838f16-kube-api-access-wgqcr\") pod \"25f085a5-e3f8-41a6-9a11-87200d838f16\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.402382 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-utilities\") pod \"25f085a5-e3f8-41a6-9a11-87200d838f16\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.402502 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-catalog-content\") pod \"25f085a5-e3f8-41a6-9a11-87200d838f16\" (UID: \"25f085a5-e3f8-41a6-9a11-87200d838f16\") " Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.404913 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-utilities" (OuterVolumeSpecName: "utilities") pod "25f085a5-e3f8-41a6-9a11-87200d838f16" (UID: "25f085a5-e3f8-41a6-9a11-87200d838f16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.420706 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f085a5-e3f8-41a6-9a11-87200d838f16-kube-api-access-wgqcr" (OuterVolumeSpecName: "kube-api-access-wgqcr") pod "25f085a5-e3f8-41a6-9a11-87200d838f16" (UID: "25f085a5-e3f8-41a6-9a11-87200d838f16"). InnerVolumeSpecName "kube-api-access-wgqcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.506462 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgqcr\" (UniqueName: \"kubernetes.io/projected/25f085a5-e3f8-41a6-9a11-87200d838f16-kube-api-access-wgqcr\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.506511 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.569462 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25f085a5-e3f8-41a6-9a11-87200d838f16" (UID: "25f085a5-e3f8-41a6-9a11-87200d838f16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:44 crc kubenswrapper[4780]: I0219 11:06:44.609432 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f085a5-e3f8-41a6-9a11-87200d838f16-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:45 crc kubenswrapper[4780]: I0219 11:06:45.290666 4780 generic.go:334] "Generic (PLEG): container finished" podID="057dc823-1b15-455c-b2a2-c4f5b31ff29b" containerID="2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7" exitCode=0 Feb 19 11:06:45 crc kubenswrapper[4780]: I0219 11:06:45.290756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerDied","Data":"2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7"} Feb 19 11:06:45 crc kubenswrapper[4780]: I0219 11:06:45.291458 4780 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-c9llt" Feb 19 11:06:45 crc kubenswrapper[4780]: I0219 11:06:45.367004 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c9llt"] Feb 19 11:06:45 crc kubenswrapper[4780]: I0219 11:06:45.380870 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c9llt"] Feb 19 11:06:45 crc kubenswrapper[4780]: I0219 11:06:45.960635 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f085a5-e3f8-41a6-9a11-87200d838f16" path="/var/lib/kubelet/pods/25f085a5-e3f8-41a6-9a11-87200d838f16/volumes" Feb 19 11:06:46 crc kubenswrapper[4780]: I0219 11:06:46.311064 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerStarted","Data":"3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a"} Feb 19 11:06:46 crc kubenswrapper[4780]: I0219 11:06:46.339761 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tbl44" podStartSLOduration=2.619539885 podStartE2EDuration="13.33973897s" podCreationTimestamp="2026-02-19 11:06:33 +0000 UTC" firstStartedPulling="2026-02-19 11:06:35.096670873 +0000 UTC m=+9937.840328322" lastFinishedPulling="2026-02-19 11:06:45.816869958 +0000 UTC m=+9948.560527407" observedRunningTime="2026-02-19 11:06:46.331748939 +0000 UTC m=+9949.075406388" watchObservedRunningTime="2026-02-19 11:06:46.33973897 +0000 UTC m=+9949.083396419" Feb 19 11:06:46 crc kubenswrapper[4780]: I0219 11:06:46.939980 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:06:46 crc kubenswrapper[4780]: E0219 11:06:46.941054 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:06:54 crc kubenswrapper[4780]: I0219 11:06:54.221620 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tbl44" Feb 19 11:06:54 crc kubenswrapper[4780]: I0219 11:06:54.223384 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tbl44" Feb 19 11:06:54 crc kubenswrapper[4780]: I0219 11:06:54.272755 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tbl44" Feb 19 11:06:54 crc kubenswrapper[4780]: I0219 11:06:54.464115 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tbl44" Feb 19 11:06:57 crc kubenswrapper[4780]: I0219 11:06:57.295048 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbl44"] Feb 19 11:06:57 crc kubenswrapper[4780]: I0219 11:06:57.431828 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tbl44" podUID="057dc823-1b15-455c-b2a2-c4f5b31ff29b" containerName="registry-server" containerID="cri-o://3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a" gracePeriod=2 Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.115971 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbl44" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.178559 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-utilities\") pod \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.178661 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-catalog-content\") pod \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.178748 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwf2\" (UniqueName: \"kubernetes.io/projected/057dc823-1b15-455c-b2a2-c4f5b31ff29b-kube-api-access-8pwf2\") pod \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\" (UID: \"057dc823-1b15-455c-b2a2-c4f5b31ff29b\") " Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.180178 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-utilities" (OuterVolumeSpecName: "utilities") pod "057dc823-1b15-455c-b2a2-c4f5b31ff29b" (UID: "057dc823-1b15-455c-b2a2-c4f5b31ff29b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.186488 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057dc823-1b15-455c-b2a2-c4f5b31ff29b-kube-api-access-8pwf2" (OuterVolumeSpecName: "kube-api-access-8pwf2") pod "057dc823-1b15-455c-b2a2-c4f5b31ff29b" (UID: "057dc823-1b15-455c-b2a2-c4f5b31ff29b"). InnerVolumeSpecName "kube-api-access-8pwf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.282531 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwf2\" (UniqueName: \"kubernetes.io/projected/057dc823-1b15-455c-b2a2-c4f5b31ff29b-kube-api-access-8pwf2\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.282823 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.359260 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "057dc823-1b15-455c-b2a2-c4f5b31ff29b" (UID: "057dc823-1b15-455c-b2a2-c4f5b31ff29b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.385870 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057dc823-1b15-455c-b2a2-c4f5b31ff29b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.445367 4780 generic.go:334] "Generic (PLEG): container finished" podID="057dc823-1b15-455c-b2a2-c4f5b31ff29b" containerID="3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a" exitCode=0 Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.445487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerDied","Data":"3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a"} Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.445809 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tbl44" event={"ID":"057dc823-1b15-455c-b2a2-c4f5b31ff29b","Type":"ContainerDied","Data":"a764316b52d20718c8c4911fb505f73a006abef6ba981a7469b08ab652d6712e"} Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.445961 4780 scope.go:117] "RemoveContainer" containerID="3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.445548 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbl44" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.511368 4780 scope.go:117] "RemoveContainer" containerID="2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.518220 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbl44"] Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.540843 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tbl44"] Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.654884 4780 scope.go:117] "RemoveContainer" containerID="b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.829243 4780 scope.go:117] "RemoveContainer" containerID="3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a" Feb 19 11:06:58 crc kubenswrapper[4780]: E0219 11:06:58.829742 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a\": container with ID starting with 3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a not found: ID does not exist" containerID="3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.829800 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a"} err="failed to get container status \"3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a\": rpc error: code = NotFound desc = could not find container \"3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a\": container with ID starting with 3533d71b261b7392da12ba7b927057aa502ec96c3a1762fd9eaae9ea0943958a not found: ID does not exist" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.829836 4780 scope.go:117] "RemoveContainer" containerID="2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7" Feb 19 11:06:58 crc kubenswrapper[4780]: E0219 11:06:58.830211 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7\": container with ID starting with 2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7 not found: ID does not exist" containerID="2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.830256 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7"} err="failed to get container status \"2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7\": rpc error: code = NotFound desc = could not find container \"2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7\": container with ID starting with 2cf7a10266d97b60dfa920236f8cc7e5f829cd1177f787fdc712df21ddb644f7 not found: ID does not exist" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.830287 4780 scope.go:117] "RemoveContainer" containerID="b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583" Feb 19 11:06:58 crc kubenswrapper[4780]: E0219 
11:06:58.830509 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583\": container with ID starting with b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583 not found: ID does not exist" containerID="b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583" Feb 19 11:06:58 crc kubenswrapper[4780]: I0219 11:06:58.830541 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583"} err="failed to get container status \"b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583\": rpc error: code = NotFound desc = could not find container \"b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583\": container with ID starting with b62f371d3f9ab5c3079377e788f8b9921d84ab82768c5951316162539f762583 not found: ID does not exist" Feb 19 11:06:59 crc kubenswrapper[4780]: I0219 11:06:59.944986 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:06:59 crc kubenswrapper[4780]: E0219 11:06:59.945532 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:06:59 crc kubenswrapper[4780]: I0219 11:06:59.954873 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057dc823-1b15-455c-b2a2-c4f5b31ff29b" path="/var/lib/kubelet/pods/057dc823-1b15-455c-b2a2-c4f5b31ff29b/volumes" Feb 19 11:07:12 crc kubenswrapper[4780]: I0219 11:07:12.939236 
4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:07:12 crc kubenswrapper[4780]: E0219 11:07:12.940403 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:07:25 crc kubenswrapper[4780]: I0219 11:07:25.944930 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:07:25 crc kubenswrapper[4780]: E0219 11:07:25.946322 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:07:36 crc kubenswrapper[4780]: I0219 11:07:36.938775 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:07:36 crc kubenswrapper[4780]: E0219 11:07:36.939766 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:07:49 crc kubenswrapper[4780]: I0219 
11:07:49.939495 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:07:49 crc kubenswrapper[4780]: E0219 11:07:49.940443 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:08:00 crc kubenswrapper[4780]: I0219 11:08:00.938675 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:08:00 crc kubenswrapper[4780]: E0219 11:08:00.939405 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:08:12 crc kubenswrapper[4780]: I0219 11:08:12.938406 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:08:12 crc kubenswrapper[4780]: E0219 11:08:12.941812 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:08:23 crc 
kubenswrapper[4780]: I0219 11:08:23.939422 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:08:23 crc kubenswrapper[4780]: E0219 11:08:23.940244 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:08:36 crc kubenswrapper[4780]: I0219 11:08:36.939151 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:08:36 crc kubenswrapper[4780]: E0219 11:08:36.939839 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:08:48 crc kubenswrapper[4780]: I0219 11:08:48.938820 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:08:48 crc kubenswrapper[4780]: E0219 11:08:48.939692 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 
19 11:08:59 crc kubenswrapper[4780]: I0219 11:08:59.938063 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:08:59 crc kubenswrapper[4780]: E0219 11:08:59.940531 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:09:13 crc kubenswrapper[4780]: I0219 11:09:13.939219 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:09:13 crc kubenswrapper[4780]: E0219 11:09:13.939987 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:09:24 crc kubenswrapper[4780]: I0219 11:09:24.941407 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:09:24 crc kubenswrapper[4780]: E0219 11:09:24.942783 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw5ts_openshift-machine-config-operator(920aa359-8647-440a-842e-066313c39414)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" 
podUID="920aa359-8647-440a-842e-066313c39414" Feb 19 11:09:38 crc kubenswrapper[4780]: I0219 11:09:38.938624 4780 scope.go:117] "RemoveContainer" containerID="6ccad37f00a250c5dc8fc9123f7db11108330123190fa96080a5938cf9b5091a" Feb 19 11:09:39 crc kubenswrapper[4780]: I0219 11:09:39.303990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw5ts" event={"ID":"920aa359-8647-440a-842e-066313c39414","Type":"ContainerStarted","Data":"098dae540d7b62bd69b62ec156b14ce1f0149eefed43ef4a462272b78bba3669"}